
Snowflake continuous real-time replication using CDC

Snowflake Reader reads from Snowflake using change data capture (CDC) and provides real-time continuous replication after an initial load. Snowflake Reader can read data manipulation language (DML) changes and limited DDL changes made to tables, such as ALTER TABLE ... ADD COLUMN and ALTER TABLE ... DROP COLUMN. Snowflake Reader identifies the changes made to each table by periodically querying the table using a CHANGES clause, with incrementing time intervals for each subsequent poll.
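For illustration, the change queries Snowflake Reader issues are of the following general form (the table name and interval bounds here are hypothetical; see the CHANGES clause in the Snowflake documentation):

SELECT * FROM PUBLIC.SAMPLETABLE1
  CHANGES (INFORMATION => DEFAULT)
  AT (TIMESTAMP => '2024-05-13 22:50:37'::TIMESTAMP_LTZ)
  END (TIMESTAMP => '2024-05-13 22:51:37'::TIMESTAMP_LTZ);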

Typical use cases for reading from Snowflake include:

Snowflake Reader summary

APIs used/data supported

Snowflake Reader uses the Snowflake JDBC driver snowflake-jdbc-3.17.0.jar, which is bundled with Striim.

Data supported

Tables (Views are not supported)

Supported targets

All targets that support WAEvent input. This includes all database, data warehouse, and file system targets.

Security and authentication

  • Key pair

  • OAuth 2.0

  • Username and password

Operations / modes supported

Data can be read from the previous 24 hours (or longer if Snowflake's default Time Travel retention period is increased).

Captures inserts, updates and deletes.

Schema management

Snowflake Reader does not support initial schema creation at the target. Use Database Reader to create the initial schema.

Snowflake Reader supports schema evolution for selected schema changes. See Handling schema evolution.

Resilience / recovery

Configurable automatic retries.

Supports recovery with at-least-once processing. See Recovering applications.

Performance

The SnowflakeConfiguration property group is used to specify the ThreadPoolSize, EnableQuickCatchUp, and YieldAfter properties, which allow you to tune the performance of Snowflake Reader and the load it places on Snowflake. The following considerations apply to configuring ThreadPoolSize (see the example after this list):

  • ThreadPoolSize < Table Count – the average read latency will be relatively higher, but the overall load on Snowflake will be lower.

  • ThreadPoolSize = Table Count – Snowflake Reader achieves optimal latency. This configuration can overload the source Snowflake account when the number of tables is high, leading to performance bottlenecks.

  • ThreadPoolSize > Table Count – not recommended; may result in unused resources in Snowflake Reader.
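For example, if the Tables property matches three tables, setting the thread pool equal to the table count gives each table a dedicated thread (the value here is illustrative):

SnowflakeConfiguration: 'ThreadPoolSize=3'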

For more information, see Snowflake Reader operational considerations.

Programmability

  • Striim TQL

  • Striim Flow Designer

  • Wizards

Metrics and auditing

Key metrics available through Striim monitoring

Key considerations and limitations

  • To query the change data from Snowflake using SnowflakeReader, change tracking must be enabled on the source table.

  • Private Link Support – Striim supports both AWS and Azure Private Link for secure connectivity. Note: Business Critical or higher edition is required for enabling Private Link in Snowflake.

  • SnowflakeReader does not currently support external tables.

  • Change data availability limitation: Change data is only available for the duration of the Time Travel retention period (default is 1 day). If an application requests change data that is beyond the retention period, it will halt. The retention period can be increased, as shown below.
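Increasing the Time Travel retention period on a source table is a standard Snowflake operation; a minimal sketch (the table name and retention value are illustrative):

ALTER TABLE PUBLIC.SAMPLETABLE1 SET DATA_RETENTION_TIME_IN_DAYS = 7;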

Data coverage

Snowflake CDC Reader supports all the Snowflake data types (see Summary of data types in the Snowflake doc) except for the Vector type.

Snowflake initial setup

Setup varies depending on what authentication type you choose. See Connection profiles.

The Snowflake account to be used by Striim must have the following privileges:

  • For all authentication types:

    • USAGE on the database and schema to be read

    • SELECT and OWNERSHIP on the tables to be read

  • For OAuth:

    • CREATE INTEGRATION on the account

The OWNERSHIP privilege gives Striim the ability to enable change tracking on the tables to be read. If you prefer not to grant OWNERSHIP to the account, you must enable change tracking manually by executing the command ALTER TABLE <tablename> SET CHANGE_TRACKING = TRUE; for each table to be read (see Explicitly Enable Change Tracking on the Underlying Tables).
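For example, the grants might look like the following (the role, database, schema, and table names are placeholders for your environment):

GRANT USAGE ON DATABASE MYDB TO ROLE STRIIM_ROLE;
GRANT USAGE ON SCHEMA MYDB.PUBLIC TO ROLE STRIIM_ROLE;
GRANT SELECT ON TABLE MYDB.PUBLIC.SAMPLETABLE1 TO ROLE STRIIM_ROLE;
GRANT OWNERSHIP ON TABLE MYDB.PUBLIC.SAMPLETABLE1 TO ROLE STRIIM_ROLE COPY CURRENT GRANTS;
-- for OAuth only:
GRANT CREATE INTEGRATION ON ACCOUNT TO ROLE STRIIM_ROLE;
-- if OWNERSHIP is not granted, enable change tracking manually instead:
ALTER TABLE MYDB.PUBLIC.SAMPLETABLE1 SET CHANGE_TRACKING = TRUE;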

Authentication

See Connection profiles for additional setup steps depending on which authentication type you choose.

Private Link

To use AWS PrivateLink, an ACCOUNTADMIN must execute the following (replace <AWS ID> and <federated token> with the appropriate ID and token for your environment):

SELECT SYSTEM$AUTHORIZE_PRIVATELINK ('<AWS ID>', '<federated token>');

To use Azure Private Link, an ACCOUNTADMIN must execute the following (replace <Azure subscription ID> with your subscription ID):

SELECT SYSTEM$AUTHORIZE_PRIVATELINK ('<Azure subscription ID>');
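As an optional check after authorization, Snowflake's standard system function can be used to retrieve the account's current Private Link configuration:

SELECT SYSTEM$GET_PRIVATELINK_CONFIG();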

Configuring Snowflake

You must have appropriate permissions in Snowflake for completing these configurations.

Create a Snowflake Reader application

To start using the Snowflake Reader in Striim apps, you can create an application in the following ways:

Create a Snowflake Reader application using a wizard

In Striim, select Apps > Create New, enter Source: Snowflake in the search bar, click the kind of application you want to create, and follow the prompts. See Creating apps using wizards for more information.

Create a Snowflake Reader application using TQL

CREATE OR REPLACE APPLICATION demo_app1;

CREATE OR REPLACE SOURCE src USING Global.SnowflakeReader ( 
  adapterName: 'SnowflakeReader', 
  connectionProfileName: 'admin.snowflakekeypair', 
  tables: 'PUBLIC.%', 
  excludedTables: 'PUBLIC.SAMPLETABLE1', 
  pollingInterval: '60s', 
  startTimestamp: 'PUBLIC.SAMPLETABLE2:\'2023-09-28T03:02:56.287-07:00\';
    PUBLIC.SAMPLETABLE5:\'2023-09-29T04:15:26.539-07:00\'', 
  connectionRetryPolicy: 'initialRetryDelay=10, retryDelayMultiplier=1.0, maxRetryDelay=30,
    maxAttempts=3, totalTimeout=600', 
  SnowflakeConfiguration: 'ThreadPoolSize=1',
  CDDLCapture: 'true',
  CDDLAction: 'Process' ) 
OUTPUT TO s1;

CREATE TARGET tgt USING Global.SysOut ( 
  name: 'tgt' ) 
INPUT FROM s1;

END APPLICATION demo_app1;

Create a Snowflake Reader application using the Flow Designer

See Creating and modifying apps using the Flow Designer.

Snowflake Reader programmer's reference

Snowflake Reader properties

Each property below is listed with its type and, where applicable, its default value.

Connection Profile Name (Enum)

Snowflake Reader requires a connection profile. See Connection profiles.

Connection Retry Policy (String; default: initialRetryDelay=10, retryDelayMultiplier=1.0, maxRetryDelay=30, maxAttempts=3, totalTimeout=600)

With the default setting, if a connection attempt is unsuccessful, the adapter will try again in 10 seconds (initialRetryDelay=10). If that attempt is unsuccessful, it will keep retrying every 10 seconds (initialRetryDelay=10 multiplied by retryDelayMultiplier=1.0) until a total of three connection attempts have been made (maxAttempts=3), after which the adapter will halt and log an exception.

The adapter will halt when either maxAttempts or totalTimeout is reached.

initialRetryDelay, maxRetryDelay, and totalTimeout may be specified in milliseconds (ms), seconds (s, the default), or minutes (m).

If retryDelayMultiplier is set to 1, connections will be attempted at the fixed interval set by initialRetryDelay.

To disable connection retry, set maxAttempts=0.

Negative values are not supported.
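For example, to retry every five seconds with exponential backoff, capping the delay at one minute and giving up after five attempts or ten minutes (the values here are illustrative):

connectionRetryPolicy: 'initialRetryDelay=5s, retryDelayMultiplier=2.0, maxRetryDelay=1m, maxAttempts=5, totalTimeout=10m'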

Excluded Tables (String)

Data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names (separated by semicolons) and wildcards may be used exactly as for Tables.

Polling Interval (String; default: 60s)

With the default value of 60s, Striim will check the Snowflake source for new data every 60 seconds. This may be specified as seconds or minutes, for example, 5m for five minutes.

Schema Evolution - CDDL Action (Enum; default: Process)

See Handling schema evolution.

Schema Evolution - CDDL Capture (Boolean; default: False)

See Handling schema evolution. Supports only ALTER TABLE ... ADD COLUMN and ALTER TABLE ... DROP COLUMN.

Snowflake Configuration (String; default: ThreadPoolSize=1, EnableQuickCatchUp=true, YieldAfter=0 (YieldAfter is disabled by default))

You can configure the following properties for Snowflake Configuration:

  • ThreadPoolSize: controls the number of threads for concurrently capturing changes from Snowflake tables. Only one thread at a time can read from each table. For optimal performance, this property should be equal to the number of tables.

  • EnableQuickCatchUp: when the reader starts from a point in the past, it tries to catch up to the current time by making frequent calls to the Snowflake endpoint, which can have a high cost impact on Snowflake. When this property is set to true, the Snowflake endpoint is called less frequently, reducing cost while improving performance and shortening catch-up time.

    This property is not exposed in the UI by default. You can add it using the Add Property option under Snowflake Configuration.

  • YieldAfter: this property is beneficial when ThreadPoolSize is less than the number of tables. In situations where some tables experience heavy traffic (for example, when a table is in a catch-up phase while polling data from the past), tables with less traffic may not get a chance to process the polled results. This issue can be addressed by limiting each table's event processing to the count specified in YieldAfter. Once a table reaches its YieldAfter limit, the resources are yielded to the next table waiting in the queue.

    This property is not exposed in the UI by default. You can add it using the Add Property option under Snowflake Configuration.
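For example, to read more tables than threads while preventing a busy table from starving the others (the values here are illustrative; EnableQuickCatchUp and YieldAfter must be added with the Add Property option):

SnowflakeConfiguration: 'ThreadPoolSize=4, EnableQuickCatchUp=true, YieldAfter=1000'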

Start Timestamp (String; default: NOW)

With the default value of NOW, reading will begin with data added after the application is started. Optionally, specify a timestamp in the Snowflake instance's time zone to begin reading from an earlier point.

To set the timestamp for all tables in all schemas, use the format %.%:'<timestamp>'. You may set the timestamp for individual tables using the format <schema>.<table>:'<timestamp>' or for all tables in a schema using <schema>.%:'<timestamp>'. You may specify multiple timestamps by separating them with semicolons, for example, test_schema.table1:'<timestamp>'; public_schema.%:'<timestamp>'.

The Start Timestamp must be within the Time Travel range (by default 24 hours) or the application will halt. See Guides > Business Continuity & Data Recovery > Time Travel > Understanding & using Time Travel in the Snowflake documentation.

If a table has not had change tracking enabled (see Snowflake initial setup), Snowflake Reader will enable it and start reading that table from the time that change tracking was enabled rather than from the specified timestamp.

The supported time formats are:

YYYY-MM-DDThh:mmTZD
YYYY-MM-DDThh:mm:ssTZD
YYYY-MM-DDThh:mm:ss.sssTZD
  • YYYY = four-digit year

  • MM = two-digit month

  • DD = two-digit day

  • hh = two digits of hour

  • mm = two digits of minute

  • ss = two digits of second

  • sss = one to three digits representing a decimal fraction of a second

  • TZD = time zone designator (Z or -hh:mm or +hh:mm)
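For example, to start reading all tables in all schemas from the same point in the past (the timestamp here is illustrative, in the Snowflake instance's time zone):

startTimestamp: '%.%:\'2024-05-13T22:50:00-07:00\''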

Tables (String)

Specify the tables to be read in the format <schema>.<table>. (The database is specified as part of the Connection URL property in the connection profile.) Multiple tables can be specified using the % wildcard, for example, myschema.inc% to read all tables with names that start with inc, or myschema.% to read all tables in myschema. Specify multiple selections by separating them with semicolons, for example, schema1.%;schema2.%. Wildcards may not be used for schemas.

Do not modify this property when recovery is enabled for the application.

Reading from tables with the same name in different cases is not supported. Reading from a table having columns with the same name in different cases is not supported.

Snowflake Reader sample TQL

CREATE SOURCE SnowflakeReaderSource USING SnowflakeReader ( 
  connectionProfileName: 'admin.snowflakekeypair', 
  tables: 'PUBLIC.%', 
  excludedTables: 'PUBLIC.SAMPLETABLE1', 
  CDDLCapture: 'true') 
OUTPUT TO SnowflakeReaderOutputStream;

Snowflake Reader WAEvent fields

The output data type for Snowflake Reader is WAEvent. The elements are:

metadata: a map including:

  • TableName: the fully qualified name of the table on which the operation was performed

  • ChangeWindow_StartTimestamp: start time (using the Snowflake service's time zone) of the interval in which the data was committed in Snowflake

  • ColumnCount: the number of columns in the table when the change data was captured

  • RowId: unique identifier for each event

  • OperationName: INSERT, UPDATE, DELETE, or the DDL operation (such as AlterColumns)

  • EventType: DML or DDL

  • ChangeWindow_EndTimestamp: end time (using the Snowflake service's time zone) of the interval in which the data was committed in Snowflake

  • DatabaseName: the Snowflake database name

  • SourceName: value is always SNOWFLAKE

  • SourceNamespace: the Striim namespace of the Snowflake Reader instance

  • Timestamp: Striim system time when the event was received from Snowflake

data: an array of fields, numbered from 0, containing:

  • for an INSERT operation, the values that were inserted

  • for an UPDATE, the values after the operation was completed

  • for a DELETE, the value of the primary key and nulls for the other fields

To retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.
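For example, a CQ of the following form could extract individual column values and a metadata field from the output stream of the sample TQL above (the CQ and output stream names are illustrative; see Parsing the fields of WAEvent for CDC readers for the documented functions):

CREATE CQ ParseSampleTableCQ
INSERT INTO ParsedStream
SELECT data[0] AS col1,
       data[1] AS col2,
       META(x, 'TableName') AS tableName
FROM SnowflakeReaderOutputStream x;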

before: for UPDATE operations, contains the primary key value from before the update. When an update changes the primary key value, you may retrieve the previous value using the BEFORE() function.

dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.

Snowflake Reader sample output

The following examples are based on this table:

CREATE TABLE AATHITHYA.TEST.SAMPLETABLE (
COL1 NUMBER PRIMARY KEY, 
COL2 FLOAT, 
COL3 VARCHAR, 
COL4 BINARY, 
COL5 BOOLEAN, 
COL6 DATE, 
COL7 TIME, 
COL8 TIMESTAMP_LTZ, 
COL9 TIMESTAMP_NTZ, 
COL10 TIMESTAMP_TZ, 
COL11 OBJECT,
COL12 ARRAY,
COL13 VARIANT
);

For the following INSERT:

INSERT INTO AATHITHYA.TEST.SAMPLETABLE VALUES (1, 0.2, 'Striim', '48454C50', TRUE, '2002-03-05',
  '23:59:59.9999', '2020-03-12 01:02:03.123456789', '2020-03-12 01:02:03.123456789',
  '2020-03-12 01:02:03.123456789 -0600', NULL, NULL, 1);

The output would be:

T1: WAEvent{
  data: [1,0.2,"Striim","48454C50",true,[ 2002, 3, 5 ],"23:59:59.9999","2020-03-12T01:02:03.123456789-07:00",
    "2020-03-12T01:02:03.123456789","2020-03-12T01:02:03.123456789-06:00",null,null,"1"]
  metadata: {
  "TableName" : "TEST.SAMPLETABLE",
  "ChangeWindow_StartTimestamp" : "2024-05-13T22:50:37.431-07:00",
  "ColumnCount" : 13,
  "RowId" : "74aa5e2e7039d0361c5923b41dae2bc8daced633",
  "OperationName" : "INSERT",
  "ChangeWindow_EndTimestamp" : "2024-05-13T22:50:47.431-07:00",
  "DatabaseName" : "AATHITHYA",
  "SourceName" : "SNOWFLAKE",
  "SourceNamespace" : "admin",
  "Timestamp" : 1715665848.535279000,
  "EventType":"DML"
  }
  userdata: null
  before: null
  dataPresenceBitMap: "fz8="
  beforePresenceBitMap: "AAA="
  typeUUID: {
  "uuidstring" : "01ef11b5-ddd9-4051-9835-3ee77256adfd"
  }
};

For the following UPDATE:

UPDATE AATHITHYA.TEST.SAMPLETABLE SET COL12 = ARRAY_CONSTRUCT(12, 'twelve') WHERE COL1 = 1;

The output would be:

T1: WAEvent{
  data: [1,0.2,"Striim","48454C50",true,[ 2002, 3, 5 ],"23:59:59.9999","2020-03-12T01:02:03.123456789-07:00",
    "2020-03-12T01:02:03.123456789","2020-03-12T01:02:03.123456789-06:00",null,"[\n  12,\n  \"twelve\"\n]","1"]
  metadata: {
  "TableName" : "TEST.SAMPLETABLE",
  "ChangeWindow_StartTimestamp" : "2024-05-13T22:52:47.431-07:00",
  "ColumnCount" : 13,
  "RowId" : "74aa5e2e7039d0361c5923b41dae2bc8daced633",
  "OperationName" : "UPDATE",
  "ChangeWindow_EndTimestamp" : "2024-05-13T22:52:57.431-07:00",
  "DatabaseName" : "AATHITHYA",
  "SourceName" : "SNOWFLAKE",
  "SourceNamespace" : "admin",
  "Timestamp" : 1715665978.893219000,
  "EventType":"DML"
  }
  userdata: null
  before: [ 1, 0.2, "Striim", "48454C50", true, [ 2002, 3, 5 ], "23:59:59.9999", 1584000123.123456789, 
    [ 2020, 3, 12, 1, 2, 3, 123456789 ], 1583996523.123456789, null, null, "1" ]
  dataPresenceBitMap: "fz8="
  beforePresenceBitMap: "fz8="
  typeUUID: {
  "uuidstring" : "01ef11b5-ddd9-4051-9835-3ee77256adfd"
  }
};

For the following DELETE:

DELETE FROM AATHITHYA.TEST.SAMPLETABLE WHERE COL1 = 1;

The output would be:

T1: WAEvent{
  data: [1,0.2,"Striim","48454C50",true,[ 2002, 3, 5 ],"23:59:59.9999","2020-03-12T01:02:03.123456789-07:00",
    "2020-03-12T01:02:03.123456789","2020-03-12T01:02:03.123456789-06:00",null,"[\n  12,\n  \"twelve\"\n]","1"]
  metadata: {
  "TableName" : "TEST.SAMPLETABLE",
  "ChangeWindow_StartTimestamp" : "2024-05-13T22:55:17.431-07:00",
  "ColumnCount" : 13,
  "RowId" : "74aa5e2e7039d0361c5923b41dae2bc8daced633",
  "OperationName" : "DELETE",
  "ChangeWindow_EndTimestamp" : "2024-05-13T22:55:27.431-07:00",
  "DatabaseName" : "AATHITHYA",
  "SourceName" : "SNOWFLAKE",
  "SourceNamespace" : "admin",
  "Timestamp" : 1715666128.735648000,
  "EventType":"DML"
  }
  userdata: null
  before: null
  dataPresenceBitMap: "fz8="
  beforePresenceBitMap: "AAA="
  typeUUID: {
  "uuidstring" : "01ef11b5-ddd9-4051-9835-3ee77256adfd"
  }
};

For the following DDL:

alter table AATHITHYA.TEST.SAMPLETABLE DROP COLUMN COL13;

The output would be:

T1: WAEvent{
  data: ["ALTER TABLE \"TEST\".\"SAMPLETABLE\" DROP COLUMN \"COL13\" ;"]
  metadata: {"CDDLMetadata":{"table":{"fullyQualifiedName":"\"TEST\".\"SAMPLETABLE\"","name":"\"SAMPLETABLE\"",
"dbEntityType":"Table","tableMappingId":null,"type":"TABLE","identifier":{"plainName":"\"TEST\".\"SAMPLETABLE\"",
"firstPart":null,"secondPart":{"plainName":"\"TEST\"","name":"\"TEST\"","metadata":{"metaEscape":null,
"multiWildcard":"%","singleWildcard":"_","bugEscape":"","encloser":"\"","backSlashEscapeSequence":"\\",
"databaseName":"","escapeSequence":"\\","case":"IGNORE_CASE"}},"thirdPart":{"plainName":"\"SAMPLETABLE\"",
"name":"\"SAMPLETABLE\"","metadata":{"metaEscape":null,"multiWildcard":"%","singleWildcard":"_",
"bugEscape":"","encloser":"\"","backSlashEscapeSequence":"\\","databaseName":"","escapeSequence":"\\",
"case":"IGNORE_CASE"}},"name":"\"TEST\".\"SAMPLETABLE\"","metadata":{"metaEscape":null,"multiWildcard":"%",
"singleWildcard":"_","bugEscape":"","encloser":"\"","backSlashEscapeSequence":"\\","databaseName":"",
"escapeSequence":"\\","case":"IGNORE_CASE"}},"schemaName":"\"TEST\"","databaseName":null,
"columns":[{"fullyQualifiedName":null,"name":"\"COL13\"","dbEntityType":"Column",
"identifier":{"plainName":"\"COL13\"","name":"\"COL13\"","metadata":{"metaEscape":null,
"multiWildcard":"%","singleWildcard":"_","bugEscape":"","encloser":"\"","backSlashEscapeSequence":"\\",
"databaseName":"snowflake","escapeSequence":"\\","case":"UPPER_CASE"}},"index":null,
"striimType":"null_STRIIM_UNKNOWN","sourceDataType":null,"targetDataType":null,
"targetDataTypeSyntax":null,"sourceDataLength":null,"targetDataLength":null,"sourcePrecision":null,
"targetPrecision":null,"sourceScale":null,"targetScale":null,"nullable":false,"unique":false,
"stringRepresentation":"Column{name='\"COL13\"', identifier=\"COL13\", index=null,
 striimType='null_STRIIM_UNKNOWN', sourceDataType='null', targetDataType='null', 
targetDataTypeSyntax='null', sourceDataLength=null, targetDataLength=null, sourcePrecision=null, 
targetPrecision=null, sourceScale=null, targetScale=null, nullable=false, unique=false}"}],
"foreignKeys":[],"pkColumns":[],"uqConstraints":{},"notNullConstraints":[],"recordCount":0,
"size":0,"sourceDDL":"ALTER TABLE \"TEST\".\"SAMPLETABLE\" DROP COLUMN \"COL13\" ;"},"violationList":[],
"finalCategory":"Green","sourceDbType":"snowflake","sourceDbSubType":null,"operation":"AlterColumns",
"subOperation":"DropColumn","exception":null,
"sql":"ALTER TABLE \"TEST\".\"SAMPLETABLE\" DROP COLUMN \"COL13\" ;","metaObject":{},
"objectPresenceMap":{"Table":true,"Column":true},"objectRequirementMap":{"Table":true,"Column":true},
"tableValid":true,"successful":true},"ChangeWindow_StartTimestamp":"2024-05-13T22:55:27.431-07:00",
"OperationName":"AlterColumns","EventType":"DDL","DatabaseName":"AATHITHYA","SourceName":"SNOWFLAKE",
"OperationSubName":"DropColumn","SourceNamespace":"admin","Timestamp":1715669128.735648000,
"TableName":"TEST.SAMPLETABLE","ChangeWindow_EndTimestamp":"2024-05-13T22:55:37.431-07:00",
"OperationType":"DDL","PREVIOUS_TYPE_UUID":{"uuidstring":"01ef11b5-ddd9-4051-9835-3ee77256adfd"}}
  userdata: null
  before: null
  dataPresenceBitMap: "AQ=="
  beforePresenceBitMap: "AA=="
  typeUUID: {"uuidstring":"01ef3775-09cf-a841-a1ea-0282642f9e1e"}
};

Snowflake Reader data type support and correspondence

See also Target data type support & mapping for Snowflake sources.

Note

If a table contains a column of a geospatial type (GEOGRAPHY or GEOMETRY), Snowflake Reader cannot capture UPDATE or DELETE operations. This is a limitation of Snowflake's Change Streams (see Guides > Data Loading > Streams > Introduction to Streams > Types of Streams and the Snowflake article "Compilation errors when select from a Stream : Invalid argument types for function 'EQUAL_NULL': (GEOMETRY, GEOMETRY)").

Each Snowflake type (with its aliases) maps to the following Striim type:

  • ARRAY: java.lang.String

  • BINARY (alias: VARBINARY): java.lang.String

  • BOOLEAN: java.lang.Boolean

  • DATE: java.time.LocalDate

  • FLOAT (aliases: DOUBLE, DOUBLE PRECISION, FLOAT4, FLOAT8, REAL): java.lang.Double

  • GEOGRAPHY: java.lang.String (see note above)

  • GEOMETRY: java.lang.String (see note above)

  • NUMBER (aliases: BIGINT, BYTEINT, DECIMAL, INT, INTEGER, NUMERIC, SMALLINT, TINYINT): java.lang.Integer

  • OBJECT: java.lang.String

  • TIME: java.lang.String

  • TIMESTAMP_LTZ: java.time.ZonedDateTime

  • TIMESTAMP_NTZ: java.time.LocalDateTime

  • TIMESTAMP_TZ: java.time.ZonedDateTime

  • VARCHAR (aliases: CHAR, CHAR VARYING, CHARACTER, NCHAR, NCHAR VARYING, NVARCHAR, NVARCHAR2, STRING, TEXT): java.lang.String

  • VARIANT: java.lang.String

Target data type support & mapping for Snowflake sources

The table below maps the data types of a Snowflake source to the data types of a target such as Azure Synapse, BigQuery, Databricks, and so on. These mappings apply when you create an application using a wizard with Auto Schema Creation, perform an initial load using Database Reader with Create Schema enabled, run the schema conversion utility, or when Striim schema evolution creates or alters target tables.

If a table contains a column of a geospatial type (GEOGRAPHY or GEOMETRY), Snowflake Reader cannot capture UPDATE or DELETE operations. This is a limitation of Snowflake's Change Streams (see Guides > Data Loading > Streams > Introduction to Streams > Types of Streams and the Snowflake article "Compilation errors when select from a Stream : Invalid argument types for function 'EQUAL_NULL': (GEOMETRY, GEOMETRY)").


Mappings are listed by Snowflake source data type.

ARRAY
  Azure Synapse: varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: varchar(250); Fabric Mirror: string; Iceberg: string; MariaDB: text; Microsoft Fabric: varchar(max); MySQL: text; Oracle: VARCHAR2(200); PostgreSQL: character varying; Snowflake: VARCHAR; Spanner: STRING(MAX); Spanner PG Dialect: text; SQL Server: varchar(8000); Vertica: varchar; Yellowbrick: character varying

BINARY
  Azure Synapse: binary; BigQuery: bytes(p); Databricks: binary; Db2 for z/OS: blob; Fabric Mirror: bytes; Iceberg: binary; MariaDB: Not supported; Microsoft Fabric: varbinary(max); MySQL: Not supported; Oracle: BLOB; PostgreSQL: bytea; Snowflake: BINARY; Spanner: BYTES(p); Spanner PG Dialect: Not supported; SQL Server: binary; Vertica: long varbinary(p); Yellowbrick: character varying(40000)

BINARY(p)
  Azure Synapse: binary(p) if (p) <= 8000 / binary if (p) > 8000*; BigQuery: bytes(p) if (p) <= 9223372036854775807 / bytes if (p) > 9223372036854775807*; Databricks: binary; Db2 for z/OS: binary(255) if (p) <= 127 / blob if (p) > 127*; Fabric Mirror: bytes; Iceberg: binary; MariaDB: binary(p) if (p) <= 255; Microsoft Fabric: varbinary(p) if (p) <= 8000 / varbinary(max) if (p) > 8000*; MySQL: binary(p) if (p) <= 255; Oracle: BLOB; PostgreSQL: bytea if (p) <= 2147483647; Snowflake: BINARY if (p) <= 8388608 / BINARY if (p) > 8388608*; Spanner: BYTES(p) if (p) <= 10485760 / BYTES(MAX) if (p) > 10485760*; Spanner PG Dialect: bytea if (p) <= 1048576; SQL Server: binary(p) if (p) <= 8000 / binary if (p) > 8000*; Vertica: binary(p) if (p) <= 65000 / long varbinary(p) if 65000 <= (p) <= 32000000; Yellowbrick: character varying(40000)

BOOLEAN
  Azure Synapse: BIT; BigQuery: boolean; Databricks: boolean; Db2 for z/OS: Not supported; Fabric Mirror: boolean; Iceberg: boolean; MariaDB: bool; Microsoft Fabric: BIT; MySQL: bool; Oracle: CHAR(5); PostgreSQL: boolean; Snowflake: BOOLEAN; Spanner: BOOL; Spanner PG Dialect: boolean; SQL Server: BIT; Vertica: boolean; Yellowbrick: boolean

DATE
  Azure Synapse: date; BigQuery: date; Databricks: date; Db2 for z/OS: date; Fabric Mirror: date; Iceberg: date; MariaDB: date; Microsoft Fabric: date; MySQL: date; Oracle: date; PostgreSQL: date; Snowflake: date; Spanner: DATE; Spanner PG Dialect: date; SQL Server: date; Vertica: date; Yellowbrick: date

DOUBLE
  Azure Synapse: float(53); BigQuery: float64; Databricks: double; Db2 for z/OS: double; Fabric Mirror: double; Iceberg: double; MariaDB: double; Microsoft Fabric: float(53); MySQL: double; Oracle: double precision; PostgreSQL: double precision; Snowflake: float; Spanner: FLOAT64; Spanner PG Dialect: float8; SQL Server: float(53); Vertica: float; Yellowbrick: double precision

GEOGRAPHY
  Azure Synapse: varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: varchar(250); Fabric Mirror: string; Iceberg: string; MariaDB: text; Microsoft Fabric: varchar(max); MySQL: text; Oracle: VARCHAR2(200); PostgreSQL: character varying; Snowflake: VARCHAR; Spanner: STRING(MAX); Spanner PG Dialect: text; SQL Server: varchar(8000); Vertica: varchar; Yellowbrick: character varying

GEOMETRY
  Azure Synapse: varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: varchar(250); Fabric Mirror: string; Iceberg: string; MariaDB: text; Microsoft Fabric: varchar(max); MySQL: text; Oracle: VARCHAR2(200); PostgreSQL: character varying; Snowflake: VARCHAR; Spanner: STRING(MAX); Spanner PG Dialect: text; SQL Server: varchar(8000); Vertica: varchar; Yellowbrick: character varying

NUMBER
  Azure Synapse: numeric(p,s); BigQuery: bignumeric; Databricks: decimal(p,s); Db2 for z/OS: varchar(1000); Fabric Mirror: decimal; Iceberg: decimal(p,s); MariaDB: decimal(p,s); Microsoft Fabric: decimal(p,s); MySQL: decimal(p,s); Oracle: number(p,s); PostgreSQL: numeric(p,s); Snowflake: numeric(p,s); Spanner: NUMERIC; Spanner PG Dialect: numeric; SQL Server: numeric(p,s); Vertica: numeric(p,s); Yellowbrick: decimal(p,s)

NUMBER(p,0)
  Azure Synapse: numeric(p,s) if (p) <= 38, (s) <= 38; BigQuery: numeric if (p) <= 29, (s) <= 9; Databricks: decimal(p,s) if (p) <= 38, (s) <= 37; Db2 for z/OS: numeric(p,s) if (p) <= 31, (s) <= 30; Fabric Mirror: decimal if (p) <= 38; Iceberg: decimal(p,s) if (p) <= 38, (s) <= 38; MariaDB: decimal(p,s) if (p) <= 65, (s) <= 30; Microsoft Fabric: decimal(p,s) if (p) <= 38, (s) <= 38; MySQL: decimal(p,s) if (p) <= 65, (s) <= 30; Oracle: number(p,s) if (p) <= 38, (s) <= 127; PostgreSQL: numeric(p,s) if (p) <= 1000, (s) <= 1000; Snowflake: numeric(p,s) if (p) <= 38, (s) <= 37; Spanner: NUMERIC if (p) <= 38, (s) <= 9; Spanner PG Dialect: numeric if (p) <= 131072, (s) <= 16384; SQL Server: numeric(p,s) if (p) <= 38, (s) <= 38; Vertica: numeric(p,s) if (p) <= 1024, (s) <= 1024; Yellowbrick: decimal(p,s) if (p) <= 38, (s) <= 38

NUMBER(p,s)
  Azure Synapse: numeric(p,s) if (p) <= 38, (s) <= 38 / varchar(8000) if (s) > 38* / varchar(8000) if (p,s) > 38*; BigQuery: numeric if (p) <= 29, (s) <= 9 / bignumeric if 29 <= (p) <= 39, 9 <= (s) <= 38 / string if (s) > 38* / string if (p,s) > 39*; Databricks: decimal(p,s) if (p) <= 38, (s) <= 37 / string if (s) > 37* / string if (p,s) > 38*; Db2 for z/OS: numeric(p,s) if (p) <= 31, (s) <= 30 / varchar(1000) if (p,s) > 31, (s) > 30*; Fabric Mirror: decimal if (p) <= 38 / string if (p,s) > 38*; Iceberg: decimal(p,s) if (p) <= 38, (s) <= 38 / string if (s) > 38* / string if (p,s) > 38*; MariaDB: decimal(p,s) if (p) <= 65, (s) <= 30 / TEXT if (p,s) > 65, (s) > 30*; Microsoft Fabric: decimal(p,s) if (p) <= 38, (s) <= 38 / varchar(max) if (s) > 38* / varchar(max) if (p,s) > 38*; MySQL: decimal(p,s) if (p) <= 65, (s) <= 30 / TEXT if (p,s) > 65, (s) > 30*; Oracle: number(p,s) if (p) <= 38, (s) <= 127 / number if (s) > 127* / number if (p,s) > 38*; PostgreSQL: numeric(p,s) if (p) <= 1000, (s) <= 1000 / double precision if (s) > 1000 / double precision if (p,s) > 1000; Snowflake: numeric(p,s) if (p) <= 38, (s) <= 37 / VARCHAR if (s) > 37* / VARCHAR if (p,s) > 38*; Spanner: NUMERIC if (p) <= 38, (s) <= 9 / STRING(MAX) if (p,s) > 308, (s) > 15*; Spanner PG Dialect: numeric if (p) <= 131072, (s) <= 16384; SQL Server: numeric(p,s) if (p) <= 38, (s) <= 38 / varchar(8000) if (s) > 38* / varchar(8000) if (p,s) > 38*; Vertica: numeric(p,s) if (p) <= 1024, (s) <= 1024; Yellowbrick: decimal(p,s) if (p) <= 38, (s) <= 38 / character varying if (s) > 38* / character varying if (p,s) > 38*

OBJECT
  Azure Synapse: varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: clob; Fabric Mirror: string; Iceberg: string; MariaDB: json; Microsoft Fabric: varchar(max); MySQL: json; Oracle: clob; PostgreSQL: json; Snowflake: VARIANT; Spanner: STRING(MAX); Spanner PG Dialect: jsonb; SQL Server: varchar(max); Vertica: long varchar(10000000); Yellowbrick: character varying

TIME
  Azure Synapse: time; BigQuery: time; Databricks: string; Db2 for z/OS: time; Fabric Mirror: time-millis; Iceberg: string; MariaDB: time; Microsoft Fabric: time(6); MySQL: time; Oracle: VARCHAR2(150); PostgreSQL: time; Snowflake: time; Spanner: STRING(150); Spanner PG Dialect: varchar(150); SQL Server: time; Vertica: time; Yellowbrick: time

TIME(p)
  Same mappings as TIME.

TIMESTAMPLTZ
  Azure Synapse: datetime2; BigQuery: timestamp; Databricks: timestamp; Db2 for z/OS: timestamp; Fabric Mirror: local-timestamp-millis; Iceberg: string; MariaDB: timestamp; Microsoft Fabric: datetime2; MySQL: timestamp; Oracle: timestamp with local time zone; PostgreSQL: timestamp without time zone; Snowflake: timestamp; Spanner: TIMESTAMP; Spanner PG Dialect: timestamptz; SQL Server: datetime2; Vertica: timestamp; Yellowbrick: timestamp

TIMESTAMPLTZ(p)
  Same mappings as TIMESTAMPLTZ.

TIMESTAMPNTZ
  Azure Synapse: datetime2; BigQuery: timestamp; Databricks: timestamp; Db2 for z/OS: timestamp; Fabric Mirror: timestamp-millis; Iceberg: timestamp_ntz; MariaDB: datetime; Microsoft Fabric: datetime2(6); MySQL: datetime; Oracle: timestamp; PostgreSQL: timestamp without time zone; Snowflake: timestamp; Spanner: TIMESTAMP; Spanner PG Dialect: timestamptz; SQL Server: datetime2; Vertica: timestamp; Yellowbrick: timestamp

TIMESTAMPNTZ(p)
  Same mappings as TIMESTAMPNTZ.

TIMESTAMPTZ
  Azure Synapse: datetimeoffset; BigQuery: timestamp; Databricks: timestamp; Db2 for z/OS: timestamp with time zone; Fabric Mirror: string; Iceberg: timestamp; MariaDB: timestamp; Microsoft Fabric: datetime2(6); MySQL: timestamp; Oracle: timestamp with time zone; PostgreSQL: timestamp with time zone; Snowflake: timestamp with time zone; Spanner: TIMESTAMP; Spanner PG Dialect: timestamptz; SQL Server: datetimeoffset; Vertica: timestamp with time zone; Yellowbrick: timestamptz

TIMESTAMPTZ(p)
  Same mappings as TIMESTAMPTZ.

VARCHAR
  Azure Synapse: varchar(p) / varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: clob(p) / varchar(p); Fabric Mirror: string; Iceberg: varchar(p); MariaDB: longtext / varchar(p); Microsoft Fabric: varchar(p) / varchar(max); MySQL: longtext / varchar(p); Oracle: clob / VARCHAR2(p); PostgreSQL: character varying(p); Snowflake: VARCHAR(p); Spanner: STRING(p) / STRING(MAX); Spanner PG Dialect: text / varchar(p); SQL Server: varchar(p) / varchar(max); Vertica: long varchar(p); Yellowbrick: character varying(p)

VARCHAR(p)
  Azure Synapse: varchar(p) if (p) <= 8000 / varchar(8000) if (p) > 8000*; BigQuery: string; Databricks: string; Db2 for z/OS: varchar(p) if (p) <= 4045 / clob(p) if 4045 <= (p) <= 2147483647; Fabric Mirror: string; Iceberg: varchar(p) if (p) <= 2147483647 / string if (p) > 2147483647; MariaDB: varchar(p) if (p) <= 65535 / longtext if 65535 <= (p) <= 2147483647 / longtext if (p) > 2147483647*; Microsoft Fabric: varchar(p) if (p) <= 8000 / varchar(max) if (p) > 8000*; MySQL: varchar(p) if (p) <= 65535 / longtext if 65535 <= (p) <= 2147483647 / longtext if (p) > 2147483647*; Oracle: VARCHAR2(p) if (p) <= 4000 / clob if (p) > 4000*; PostgreSQL: character varying(p) if (p) <= 10485760 / character varying if (p) > 10485760*; Snowflake: VARCHAR(p) if (p) <= 16777216 / VARCHAR if (p) > 16777216*; Spanner: STRING(p) if (p) <= 2621440 / STRING(MAX) if (p) > 2621440; Spanner PG Dialect: varchar(p) if (p) <= 1048576 / text if (p) > 1048576*; SQL Server: varchar(p) if (p) <= 8000 / varchar(max) if (p) > 8000*; Vertica: varchar(p) if (p) <= 65000 / long varchar(p) if 65000 <= (p) <= 32000000; Yellowbrick: character varying(p) if (p) <= 64000 / character varying if (p) > 64000*

VARIANT
  Azure Synapse: varchar(8000); BigQuery: string; Databricks: string; Db2 for z/OS: clob; Fabric Mirror: string; Iceberg: string; MariaDB: LONGTEXT; Microsoft Fabric: varchar(max); MySQL: LONGTEXT; Oracle: clob; PostgreSQL: text; Snowflake: VARCHAR; Spanner: STRING(MAX); Spanner PG Dialect: text; SQL Server: varchar(max); Vertica: long varchar; Yellowbrick: character varying

*When using the schema conversion utility, these mappings appear in converted_tables_with_striim_intelligence.sql.