Google Cloud Next – Cloud SQL Demo

Alok Pareek, EVP of Products at Striim, and Codin Pora, Director of Partner Technology at Striim, provide a demo at Google Cloud Next SF, April 2019, on how the Striim platform can enable Google Cloud users to move real-time data from a variety of locations into their Cloud SQL environment.

Unedited Transcript:

Great. So we finished the assessment. We decided that we want to do the migration; we want to move from Oracle over to PostgreSQL. One way of doing it is with Striim.

Welcome everybody. My name is Alok and I’m from Striim. I sort of feel like a brain surgeon brought in after the complexity has hit 100. So I get into the picture and tell you how to do the heterogeneous migration from Oracle to Postgres. That’s my demo today. Let’s click to the next one. Okay, just want to make sure my clicker’s working here. Let’s go to the next slide, Codin. Sorry. All right, just a little bit before I get into the demo. Striim is a next-generation platform, and we focus on three different solution categories: cloud adoption, hybrid cloud data integration, and in-memory stream processing. Of course, the focus today is on cloud adoption, and I’m delighted to announce a partnership with Google on our cloud adoption, especially in the area of database migration.

So what I’m going to focus on today is the Oracle to Cloud SQL for PostgreSQL migration. And as Ayir pointed out, downtime is a big problem, especially for mission-critical applications. You might have a number of different applications, from your CRM, billing, payments, core banking, etc. These might not necessarily be able to take any kind of an outage, but you still want to achieve your initiatives for cloud modernization or for continuous operations. You might also have data on other clouds and you want to synchronize that with newer applications that you’re deploying on Google Cloud. So those are all the benefits that you gain. Of course, the critical problem here is the downtime part. So let’s get into the demo and I’m going to actually show you how Striim helps you do that.

So I’m going to have Codin help me with the demo. This is the landing page of the Striim product. There are three parts here: there’s a dashboard space, there are the apps (you’ll hear me use apps and data pipelines interchangeably), and there’s also a place where you can preview your sources before you get started. So we’re going to jump into the apps part of the demo. What you see here are prebuilt pipelines. Remember, this is a zero-downtime type of scenario I’m talking about. It has two critical phases. There’s phase one, the instantiation, where you make a one-time copy of all of the data, which could be gigabytes, perhaps terabytes, or it could be less. That’s followed by a catch-up or synchronization phase, which we achieve through some very specialized readers that help you catch up the source and the target.
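To make the two phases concrete, here is a minimal Python sketch of the idea: remember a snapshot position, bulk-copy the rows, then replay whatever changed after that position. The toy data structures and function names are illustrative only; Striim implements this with its own readers and writers.

```python
# Illustrative sketch (not Striim code) of a zero-downtime migration:
# phase 1 bulk-copies a snapshot of the source; phase 2 replays the
# changes that arrived while the copy was running.

# A toy "source database": current rows plus an append-only change log.
source_rows = {1: "alpha", 2: "beta", 3: "gamma"}
change_log = []  # entries look like (op, key, value)

def snapshot_position():
    """Remember how far the change log has advanced before the bulk copy."""
    return len(change_log)

def bulk_copy(target):
    """Phase 1: one-time copy of every row that exists right now."""
    target.update(source_rows)

def replay_changes(target, start_pos):
    """Phase 2: apply every change recorded since the snapshot position."""
    for op, key, value in change_log[start_pos:]:
        if op == "DELETE":
            target.pop(key, None)
        else:  # INSERT or UPDATE
            target[key] = value

target_rows = {}
pos = snapshot_position()
bulk_copy(target_rows)

# Writes keep happening on the source while (or after) the copy runs...
change_log.append(("UPDATE", 2, "beta-v2"))
change_log.append(("INSERT", 4, "delta"))
source_rows[2] = "beta-v2"
source_rows[4] = "delta"

replay_changes(target_rows, pos)
assert target_rows == source_rows  # both sides are now in sync
```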

And ultimately the two sides are going to be in sync. So let’s step into the initial load phase. This is a simple pipeline going from an on-premises Oracle database into a Cloud SQL for PostgreSQL database. It was built using the flow designer, by choosing a number of out-of-the-box components where you configure your sources and your targets. You can also transform the data along the way, because the data formats might be different or you may want to realign some of the data as it flows through. Let’s step into the configuration of just the Oracle database. Here’s where you configure all of the different properties. It’s pretty flexible in terms of how you want to move the data.
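As a rough illustration of an in-flight transform (not the Striim flow designer itself), the sketch below reshapes a source row to a hypothetical target schema; the column names and the date format are made up for the example.

```python
# Illustrative sketch: a small in-flight transform between a source reader
# and a target writer. Field names and the date reformatting are hypothetical.
from datetime import datetime

def transform(row: dict) -> dict:
    """Realign a source row to the target schema as it passes through."""
    return {
        "order_id": int(row["ORDER_ID"]),                    # rename / cast
        "order_date": datetime.strptime(row["ORDER_DATE"],   # reformat date
                                        "%d-%b-%y").date().isoformat(),
        "status": row["STATUS"].strip().lower(),              # normalize text
    }

source_row = {"ORDER_ID": "1001", "ORDER_DATE": "02-APR-19", "STATUS": " OPEN "}
print(transform(source_row))
# {'order_id': 1001, 'order_date': '2019-04-02', 'status': 'open'}
```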

On the cloud side, here’s where you provide your service account and your connections into the PostgreSQL database. In this case, we are going to move two separate tables for the purposes of the demo: we’ll attempt a live migration of a million records in the line items table, and then in phase two do some DML activity with change data capture for the orders table. So let’s go ahead and deploy the application. Striim can run in the cloud or on-prem; it’s up to you, it’s pretty flexible. And as we deploy, you can see that there’s actually a deliberate mismatch that we introduced. You heard us talk about some of the assessment checks and so forth. It’s important, before you actually do the migration, that the source and target schemas add up: they are compatible, there are no exceptions, etcetera.
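The kind of pre-deployment check being described can be pictured as a simple type-compatibility comparison. The sketch below is illustrative only; the Oracle-to-PostgreSQL type map and the column names are assumptions, not Striim's actual rules.

```python
# Illustrative sketch: a pre-deployment check that compares source and target
# column types and flags incompatibilities before the migration starts.

# A hypothetical mapping of Oracle types to acceptable PostgreSQL types.
COMPATIBLE = {
    "NUMBER": {"numeric", "bigint", "integer"},
    "VARCHAR2": {"varchar", "text"},
    "DATE": {"timestamp", "date"},
    "BLOB": {"bytea"},
}

def check_schema(source_cols: dict, target_cols: dict) -> list:
    """Return a list of human-readable mismatches, empty if compatible."""
    problems = []
    for col, ora_type in source_cols.items():
        pg_type = target_cols.get(col)
        if pg_type is None:
            problems.append(f"{col}: missing on target")
        elif pg_type not in COMPATIBLE.get(ora_type, set()):
            problems.append(f"{col}: {ora_type} -> {pg_type} is incompatible")
    return problems

oracle_orders = {"order_id": "NUMBER", "comment": "BLOB", "created": "DATE"}
postgres_orders = {"order_id": "bigint", "comment": "text", "created": "date"}
for issue in check_schema(oracle_orders, postgres_orders):
    print(issue)   # comment: BLOB -> text is incompatible
```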

So here it did detect that there’s one incompatible data type. We’ll go ahead and just skip this specific one; according to this, we actually fixed it in another flow. So we’re going to jump into that initial load flow. Let’s go ahead and deploy that, and once we deploy it, it’s ready to get started. We want to check whether the tables are already empty. In this case, in the line item table, you can see that the count is zero, and now we’re going to go ahead and run the application. That’s the one that actually goes against your critical database, which is Oracle on-prem, so let’s go ahead and start the application. We’ll also preview the data along the way.
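A minimal version of that "is the target table empty?" pre-flight check might look like the following. It uses the standard-library sqlite3 module so it runs as-is; against Cloud SQL you would use a PostgreSQL driver such as psycopg2 with the same DB-API calls.

```python
# Illustrative sketch: refuse to start the one-time copy if the target
# table already contains rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lineitem (id INTEGER PRIMARY KEY, qty INTEGER)")

def assert_table_empty(connection, table: str) -> None:
    (count,) = connection.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    if count != 0:
        raise RuntimeError(f"{table} already has {count} rows; refusing to load")
    print(f"{table} is empty, safe to start the initial load")

assert_table_empty(conn, "lineitem")
```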

You can see now the data is beginning to flow. In this case, we have a million records in the line items table, and that sort of keeps going. You can also monitor the application’s progress: you can see that so far we have 400,000 records in, and the input and output counts obviously keep changing. This is where a lot of the magic is happening. Things like batch optimization, parallelism, event guarantees, and commit semantics are all taking place behind the scenes, so it will be transactionally consistent when you actually complete the migration. So now I think we’re done. Why don’t we take a look at the PostgreSQL database? Make sure that... and there you go. Your million-record table was just moved pretty quickly; the performance is very impressive.
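The batching idea behind the bulk load can be sketched as reading the source in chunks and writing each chunk with a single executemany and commit, rather than one round trip per row. This is a simplified illustration, not Striim's implementation; it uses sqlite3 on both ends so it runs anywhere, whereas the real pipeline would hold an Oracle connection and a Cloud SQL PostgreSQL connection.

```python
# Illustrative sketch: chunked bulk copy with per-batch commits.
import sqlite3

BATCH_SIZE = 10_000

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
source.execute("CREATE TABLE lineitem (id INTEGER, qty INTEGER)")
target.execute("CREATE TABLE lineitem (id INTEGER, qty INTEGER)")
source.executemany("INSERT INTO lineitem VALUES (?, ?)",
                   [(i, i % 50) for i in range(100_000)])

cur = source.execute("SELECT id, qty FROM lineitem")
copied = 0
while True:
    batch = cur.fetchmany(BATCH_SIZE)
    if not batch:
        break
    target.executemany("INSERT INTO lineitem VALUES (?, ?)", batch)
    target.commit()            # commit per batch keeps each chunk consistent
    copied += len(batch)
    print(f"{copied} rows copied so far")

(count,) = target.execute("SELECT COUNT(*) FROM lineitem").fetchone()
assert count == 100_000
```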

What I want you to remember is that this could be gigabytes or terabytes of data. What we want to do during this time is capture all of the data that is actually changing in the database. This is your actual activity, and it is being moved using a technique called change data capture. Striim gives you a specialized reader that, in this case, lets you tap into that. So here’s where the reader properties are; in the interest of time we can go a little faster. Same configuration for the target, obviously. What we do now is take activity from the redo log of the Oracle database and effectively replay that on the target side to catch it up, thereby avoiding any kind of an outage to the production database.
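Conceptually, the apply side of change data capture turns each captured change into equivalent DML on the target. The sketch below shows that replay loop with a hypothetical event format; the real events come from Striim's redo-log reader, not from a Python list.

```python
# Illustrative sketch: replay captured change events as DML on the target.
import sqlite3

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, status TEXT)")

def apply_event(conn, event: dict) -> None:
    """Replay one captured change on the target database."""
    op, data = event["op"], event["data"]
    if op == "INSERT":
        conn.execute("INSERT INTO orders (order_id, status) VALUES (?, ?)",
                     (data["order_id"], data["status"]))
    elif op == "UPDATE":
        conn.execute("UPDATE orders SET status = ? WHERE order_id = ?",
                     (data["status"], data["order_id"]))
    elif op == "DELETE":
        conn.execute("DELETE FROM orders WHERE order_id = ?",
                     (data["order_id"],))
    conn.commit()

captured = [
    {"op": "INSERT", "data": {"order_id": 1, "status": "NEW"}},
    {"op": "UPDATE", "data": {"order_id": 1, "status": "SHIPPED"}},
    {"op": "INSERT", "data": {"order_id": 2, "status": "NEW"}},
    {"op": "DELETE", "data": {"order_id": 2}},
]
for ev in captured:
    apply_event(target, ev)

print(target.execute("SELECT * FROM orders").fetchall())  # [(1, 'SHIPPED')]
```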

So again, in this case, we have a mismatch here. It’s more of a precision mismatch. In this case, I’ll ignore it because I already know that all my data values adhere to the target side schema. So go ahead and ignore that, and we want to actually go ahead and move the table here. Again, the preview option is selected, so let’s go ahead and start it. Codin’s going to generate some DML activity using a separate simulator. This is where you’re going to see some inserts, updates, and deletes, and you can see that he’s running a number of DML operations against the orders table. With Striim you can actually already see that. Now, not only are we capturing the data, but also the metadata. These are things from the Oracle database, like the transaction id, the system change number, the table name, the operation type such as insert or delete.
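The metadata that travels with each change event, and the kind of filtering mentioned next, can be pictured like this. The field names and sample values are hypothetical stand-ins, not Striim's event schema.

```python
# Illustrative sketch: a change event that carries both the row data and the
# metadata captured from the source (transaction id, system change number,
# table name, operation type), plus a simple filter over those fields.
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    txn_id: str        # Oracle transaction id
    scn: int           # system change number (position in the redo log)
    table: str         # source table name
    op: str            # INSERT / UPDATE / DELETE
    data: dict         # the row values themselves

events = [
    ChangeEvent("7.33.1042", 4820011, "ORDERS", "INSERT", {"order_id": 10}),
    ChangeEvent("7.33.1042", 4820011, "ORDERS", "UPDATE", {"order_id": 10}),
    ChangeEvent("9.12.2210", 4820097, "ORDERS", "DELETE", {"order_id": 7}),
]

# Example filter: only the deletes against the ORDERS table.
deletes = [e for e in events if e.table == "ORDERS" and e.op == "DELETE"]
for e in deletes:
    print(e.scn, e.txn_id, e.data)
```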

You can filter on stuff like this. And then you can finally go log into the orders table and see that there is in fact data present in this table; it was empty beforehand. So this is a combined technique of instantiation followed by change data capture and apply that allows you to, at your own leisure, move your on-prem database into the cloud database. You can actually go ahead and test both sides while the migration is ongoing. It may take you a day, a week, or potentially months, depending on the size of the database. But the key benefits are that there is no outage and you can keep your operations continuous for your mission-critical applications.
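One simple way to test both sides while the migration is ongoing is to compare per-table row counts on the source and the target. The sketch below is illustrative and uses sqlite3 in place of the real Oracle and Cloud SQL connections.

```python
# Illustrative sketch: compare row counts per table across source and target.
import sqlite3

def row_counts(conn, tables):
    return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for conn in (source, target):
    conn.execute("CREATE TABLE orders (id INTEGER)")
    conn.execute("CREATE TABLE lineitem (id INTEGER)")
source.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(5)])
target.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(5)])

tables = ["orders", "lineitem"]
src, tgt = row_counts(source, tables), row_counts(target, tables)
for t in tables:
    status = "OK" if src[t] == tgt[t] else "MISMATCH"
    print(f"{t}: source={src[t]} target={tgt[t]} {status}")
```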