
Oracle Database

Striim offers two CDC readers to read data from your Oracle databases: Oracle Reader and OJet.

Oracle Reader and OJet can both read from Oracle databases 11g and higher, RAC, and PDB / CDB. Both can read from a primary database, logical standby database, or Active Data Guard standby database.

OJet's primary advantage over Oracle Reader is higher throughput. Striim recently published a white paper documenting OJet's ability to read 160 GB of CDC data per hour: see Real Time Data Streaming from Oracle to Google BigQuery and Real-time Data Integration from Oracle to Google BigQuery Using Striim.

See the table below for a detailed feature comparison.

Feature comparison: Oracle Reader and OJet

Oracle Reader

OJet

supported versions

     Oracle Database 11g Release 2 version 11.2.0.4

     Oracle Database 12c Release 1 version 12.1.0.2 and Release 2 version 12.2.0.1

     Oracle Database 18c (all versions)

     Oracle Database 19c (all versions)

     Oracle Database 21c (all versions)

Known issue DEV-36641: column map in Database Writer does not work with wildcard

supported topologies

     PDB / CDB

     application PDB

 

     RAC (all versions)

can read from

     primary database

     logical standby database

     Active Data Guard non-CDB standby database

via archive log

     Active Data Guard CDB standby database

via archive log (through version 19.9 only)

 

     Data Guard physical standby

via archive log

 

     downstream database

 

     reference-partitioned tables

key features

     DML operations replicable in target

INSERT, UPDATE, DELETE

INSERT, UPDATE, DELETE, TRUNCATE

     schema evolution

for 11g to 18c only, not for PDB / CDB

for all supported versions

     uncommitted transaction support

 

     Striim-side transaction caching

 

     recovery

     quiesce

     bidirectional replication

     SSL

     Oracle Native Network Encryption

 

used automatically in Striim Cloud

not supported in Striim Platform in this release

     supported when Striim Platform is running in Microsoft Windows

 

summary of supported data types (for full details, see Oracle Reader and OJet data type support and correspondence)

  • BINARY_DOUBLE, BINARY_FLOAT, CHAR, DATE, FLOAT, INTERVALDAYTOSECOND, INTERVALYEARTOMONTH, JSON, NCHAR, NUMBER, NVARCHAR2, RAW, TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE, TIMESTAMP WITH TIME ZONE, VARCHAR2

  • ROWID

  • VARRAY

see Oracle Reader and OJet data type support and correspondence

  • BLOB, CLOB, LONG, LONG RAW, XMLTYPE

see Oracle Reader and OJet data type support and correspondence

  • BFILE

see Oracle Reader and OJet data type support and correspondence

  • ADT, NESTED TABLE, SD0_GEOMETRY, UDT, UROWID

Striim provides templates for creating applications that read from Oracle and write to various targets. See Creating an application using a template for details.

To learn more about these CDC readers or purchase them, Contact Striim support.

Configuring Oracle to use Oracle Reader

Using Oracle Reader requires the following configuration changes in Oracle:

  • enable archivelog, if not already enabled

  • enable supplemental log data, if not already enabled

  • set up LogMiner

  • create a user account for OracleReader and grant it the privileges necessary to use LogMiner

Basic Oracle configuration tasks

The following tasks must be performed regardless of which Oracle version or variation you are using.

Enable archivelog

  1. Log in to SQL*Plus as the sys user.

  2. Enter the following command:

    select log_mode from v$database;

    If the command returns ARCHIVELOG, it is enabled. Skip ahead to Enabling supplemental log data.

  3. If the command returns NOARCHIVELOG, enter: shutdown immediate

  4. Wait for the message ORACLE instance shut down, then enter: startup mount

  5. Wait for the message Database mounted, then enter:

    alter database archivelog;
    alter database open;
  6. To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.

Enable supplemental log data

If you are using Amazon RDS for Oracle, see the instructions below.

  1. Enter the following command:

    select supplemental_log_data_min, supplemental_log_data_pk from v$database;

    If the command returns YES or IMPLICIT, supplemental log data is already enabled. For example, 

    SUPPLEME SUP
    -------- ---
    YES      NO

    indicates that supplemental log data is enabled, but primary key logging is not. If it returns anything else, enter:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
  2. To enable primary key logging for all tables in the database, enter:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

    Alternatively, to enable primary key logging only for selected tables (do not use this approach if you plan to use wildcards in the OracleReader Tables property to capture change data from new tables):

    ALTER TABLE <schema name>.<table name> ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
  3. If replicating Oracle data to one of the following

    • Azure Synapse with Mode set to MERGE

    • BigQuery with Optimized Merge disabled

    • Redshift

    • Snowflake with Optimized Merge disabled

    Enable supplemental logging on all columns for all tables in the source database:

    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

    Alternatively, to enable only for selected tables:

    ALTER TABLE <schema>.<table name> ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
  4. To activate your changes, enter:

    alter system switch logfile;

If using Amazon RDS for Oracle, use the following commands instead:

exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD');
exec rdsadmin.rdsadmin_util.alter_supplemental_logging(p_action => 'ADD', p_type => 'PRIMARY KEY');
exec rdsadmin.rdsadmin_util.switch_logfile;
select supplemental_log_data_min, supplemental_log_data_pk from v$database;

Create an Oracle user with LogMiner privileges

You may use LogMiner with any supported Oracle version.

Log in as sysdba and enter the following commands to create a role with the privileges required by the Striim OracleReader adapter and create a user with that privilege. You may give the role and user any names you like. Replace ******** with a strong password.

If using Oracle 11g, or 12c, 18c, or 19c without CDB

Enter the following commands:

create role striim_privs;
grant create session,
  execute_catalog_role,
  select any transaction,
  select any dictionary
  to striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to striim_privs;
create user striim identified by ******** default tablespace users;
grant striim_privs to striim;
alter user striim quota unlimited on users;

For Oracle 12c or later, also enter the following command:

grant LOGMINING to striim_privs;

If using Database Vault, omit execute_catalog_role, and also enter the following commands:

grant execute on SYS.DBMS_LOGMNR to striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to striim_privs;
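To confirm that the role was set up as intended, you can query the data dictionary. This is a sketch assuming the role name striim_privs used above; adjust the grantee name if you chose a different one:

```sql
-- System privileges granted to the role (should include SELECT ANY TRANSACTION, etc.)
SELECT privilege FROM dba_sys_privs WHERE grantee = 'STRIIM_PRIVS';

-- Object grants on the LogMiner tables (should list the four SYSTEM.LOGMNR_* tables)
SELECT table_name, privilege FROM dba_tab_privs WHERE grantee = 'STRIIM_PRIVS';
```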

If using Oracle 12c, 18c, or 19c with PDB

Enter the following commands. Replace <PDB name> with the name of your PDB.

create role c##striim_privs;
grant create session,
execute_catalog_role,
select any transaction,
select any dictionary,
logmining
to c##striim_privs;
grant select on SYSTEM.LOGMNR_COL$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_OBJ$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_USER$ to c##striim_privs;
grant select on SYSTEM.LOGMNR_UID$ to c##striim_privs;
create user c##striim identified by ******* container=all;
grant c##striim_privs to c##striim container=all;
alter user c##striim set container_data = (cdb$root, <PDB name>) container=current;

If using Database Vault, omit execute_catalog_role, and also enter the following commands:

grant execute on SYS.DBMS_LOGMNR to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_D to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_LOGREP_DICT to c##striim_privs;
grant execute on SYS.DBMS_LOGMNR_SESSION to c##striim_privs;

Creating the QUIESCEMARKER table for Oracle Reader

To allow Striim to quiesce (see QUIESCE) an application that uses Oracle Reader, you must create a quiesce marker table in Oracle. (This is not necessary when reading from a standby or when using OJet; see Reading from a standby.)

The DDL for creating the table is: 

CREATE TABLE QUIESCEMARKER (source varchar2(100), 
  status varchar2(100),
  sequence NUMBER(10),
  inittime timestamp, 
  updatetime timestamp default sysdate, 
  approvedtime timestamp, 
  reason varchar2(100), 
  CONSTRAINT quiesce_marker_pk PRIMARY KEY (source, sequence));
ALTER TABLE QUIESCEMARKER ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

The Oracle user specified in Oracle Reader's Username property must have SELECT, INSERT, and UPDATE privileges on this table.
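For example, assuming the user striim created in the earlier setup steps and the QUIESCEMARKER table in that user's default schema (adjust names to your environment), the grants could be issued as:

```sql
-- Hypothetical example: allow the Oracle Reader user to use the quiesce marker table
GRANT SELECT, INSERT, UPDATE ON QUIESCEMARKER TO striim;
```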

Reading from a standby

Oracle Reader can read from a standby rather than a primary database.

Requirements

Limitations

  • Oracle Reader will read only from the archive log, not from redo logs.

  • Bidirectional replication is not supported.

  • Oracle Reader will reject QUIESCE if there are any open transactions.

Create the dictionary file

  1. On the primary, use SQL*Plus or another client to create a dictionary file.

    For Oracle 11g or 12.1.0.2, enter the following commands, replacing <path> in the second command with the path returned by the first command. If the first command does not return a path, you must set UTL_FILE_DIR.

    show parameter utl_file_dir;
    execute dbms_logmnr_d.build('dict.ora', '<path>');

    For Oracle 12.2.0.1.0 or later, enter the following commands.

    CREATE DIRECTORY "dictionary_directory" AS '/opt/oracle/dictionary';
    EXECUTE dbms_logmnr_d.build(dictionary_location=>'dictionary_directory', 
    dictionary_filename=>'dict.ora',
    options => dbms_logmnr_d.store_in_flat_file);
  2. Copy dict.ora to a directory on the standby.

Configure Oracle Reader properties in your application

  1. Set Dictionary Mode to ExternalDictionaryFileCatalog.

  2. Set Database Role to PHYSICAL_STANDBY.

  3. Set External Dictionary File to the fully qualified name of the dictionary file you copied to the standby, for example, /home/oracle/dict.ora

Handling DDL changes

When DDL changes must be made to the tables being read by Oracle Reader, do the following:

  1. On the primary, stop DML activity and make sure there are no open transactions.

  2. On the primary, force a log switch.

  3. In Striim, quiesce the application (see QUIESCE). If Oracle Reader refuses the quiesce, wait a few minutes and try again.

  4. On the primary, perform the DDL changes.

  5. Repeat the procedure in "Create the dictionary file," above, replacing the old file on the standby with the new one.

  6. Start the Striim application.

Configuring Oracle to use OJet

OJet requires a special license. For more information, Contact Striim support.

OJet is supported only when Striim is running on Linux. glibc version 14 or later must be installed before deploying OJet.

In an Oracle RAC environment, OJet must connect to a SCAN listener (see About Connecting to an Oracle RAC Database Using SCANs).

OJet can read from:

Oracle configuration varies depending on which of these topologies you use.

  1. For all topologies, Enable archivelog.

  2. For logical standby only, set DATABASE GUARD to standby to prevent users other than SYS from making changes to the standby's data:

    ALTER DATABASE GUARD standby;
  3. For a single primary database or logical standby, follow the instructions in Running the OJet setup script on Oracle.

    For Active Data Guard, follow the instructions in Configuring Active Data Guard to use OJet.

  4. Execute the following command to extract the database dictionary to the redo log. Going forward, run this command once a week.

    EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
  5. If there may be open transactions when you start an OJet application, run the following command to get the current SCN, and specify it as the Start Scn value in the application's OJet properties.

    SELECT CURRENT_SCN FROM V$DATABASE;

Enable archivelog

  1. Log in to SQL*Plus as the sys user.

  2. Enter the following command:

    select log_mode from v$database;

    If the command returns ARCHIVELOG, it is enabled; skip the remaining steps in this section.

  3. If the command returns NOARCHIVELOG, enter: shutdown immediate

  4. Wait for the message ORACLE instance shut down, then enter: startup mount

  5. Wait for the message Database mounted, then enter:

    alter database archivelog;
    alter database open;
  6. To verify that archivelog has been enabled, enter select log_mode from v$database; again. This time it should return ARCHIVELOG.

Running the OJet setup script on Oracle

The following instructions assume that OJet will be reading from a single primary database or a logical standby. If you are using Active Data Guard and reading from a downstream database, follow the instructions Configuring Active Data Guard to use OJet instead.

Before running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges; those will be added by the setup script. Do not set IDLE_TIME on this user's session, or the OJet application will halt.

The setup script is striim/tools/bin/setupOJet.sh. The syntax is:

setupOJet.sh <connection URL> [check] <sys user name> <password> <ojet user name> [y [y] [<file name>]]
  • check: include the check option to check whether the Oracle environment is ready to work with OJet. The output will look something like this:

    Configuration check for OJet using user sys as sysdba started.
    Settings for downstream capture connecting to the capture database, using user sys as sysdba
    
    Checking existing configuration:
    Apply server owners: 0
    Rule owners: 0
    Queue table owners: 0
    Capture owners: 0
    Queue owners: 0
    Rule set owners: 0
    Checking required database options
    Checking if required patches are installed
    Warning: patch 31254535 not found to be installed
    Warning: patch 28209272 not found to be installed
    Warning: patch 34470389 not found to be installed
    Warning: patch 32338220 not found to be installed
    Warning: patch 34010877 not found to be installed

    This indicates that you must install the five specified patches before running the setup script.

  • connection URL: either host:port:SID or host:port/service

  • sys user name: an Oracle user with DBA privilege that can connect as sysdba. You may need to configure REMOTE_LOGIN_PASSWORDFILE.

  • password: the specified sys user's password

  • ojet user name: the name of the Oracle user you created before running setupOJet.sh

  • remote (for downstream setup only): y

  • source (for downstream setup only): y

  • file (for downstream setup only): file name with tables for instantiation at downstream source

Examples (replace the IP address, SID, and password with those for your environment):

  • for a single primary database or a logical standby: setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER

  • for the Active Data Guard standby server: setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y y

  • for the Active Data Guard downstream server: setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y

If the script reports that an Oracle fix is missing, install it and run the script again.

The script's output should be similar to this:

./setupOJet.sh localhost:1521:ORCL sys oracle OJET_USER
  Configuration for OJet using user OJET_USER started: 
  Granted resources to user OJET_USER
  Granted select any dictionary privilege to user OJET_USER
  Enabled replication
  Enabled streaming
  Enabled supplemental logging
  Building dictionary log …
  Done

Configuring Active Data Guard to use OJet

Two types of downstream configurations are supported: real-time and archivelog. They differ in how redo changes are shipped from the source database to the downstream database, and in where the capture process runs and OJet connects.

In an Active Data Guard environment, the physical standby database is in read-only mode, so OJet cannot attach directly to it. Thus a cascaded setup is required. A cascaded redo transport destination (also known as a terminal destination) receives the primary database redo indirectly from a standby database, rather than directly from the primary database. Follow the Oracle documentation to set up the cascaded configuration. For more information, see Cascaded Redo Transport Destinations.

For Active Data Guard, the standby database needs to be in recovery mode so that metadata is in sync with the primary database. The required steps for setup are outlined below, with differences between the two setups. Refer to Oracle documentation for the steps to perform these changes.

The passwords for the sys and OJet users must be the same on the standby and downstream databases.

Primary database setup

Run the DBMS_CAPTURE_ADM.BUILD procedure on the primary database to extract the data dictionary to the redo log when a capture process is created:

EXECUTE DBMS_CAPTURE_ADM.BUILD;

Primary or standby database setup

Configure the following settings on the source database (the primary or the standby, depending on your configuration):

  1. Enable archivelog.

  2. Add the connection details for the downstream database to: $TNS_ADMIN/tnsnames.ora

  3. To configure the standby to replicate to the downstream database, add a new log_archive_dest in the standby, depending on the type of downstream configuration (change the identifiers log_archive_dest_3, ORCL, and INST1 to reflect your environment).

    • For real-time capture:

      ALTER SYSTEM set log_archive_dest_3='service=INST1 ASYNC NOREGISTER
        VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCL' scope=both;
      SHOW PARAMETER log_archive_dest_3;
      ALTER SYSTEM set log_archive_dest_state_3=enable scope=both;
      
    • For archivelog capture:

      ALTER SYSTEM set log_archive_dest_3='SERVICE=inst1 ASYNC NOREGISTER
        VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
        TEMPLATE=/opt/oracle/product/19c/dbhome_1/srl_dbs/dbs1_arch_%t_%s_%r.log
        DB_UNIQUE_NAME=ORCL' scope=both;
      SHOW PARAMETER log_archive_dest_state_3;
      ALTER SYSTEM set log_archive_dest_state_3=enable scope=both;
      
  4. Before running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges, those will be added by the setup script. Do not set IDLE_TIME on this user's session or the OJet application will halt.

  5. Run the setup script (see Running the OJet setup script on Oracle), appending two y parameters. For example:

    setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y y

Downstream database setup

Configure the following settings on the downstream database:

  1. Add the source database connection details to: $TNS_ADMIN/tnsnames.ora.

  2. Update the log_archive_config with the source database. For example (change oradb.orcl to reflect your environment):

    ALTER SYSTEM set log_archive_config='DG_CONFIG=(oradb,orcl)' scope=both;
  3. Ensure that the locally generated redo is at a different location than the source database redo:

    ALTER SYSTEM set
      LOG_ARCHIVE_DEST_1='LOCATION=/opt/oracle/product/19c/dbhome_1/dbs/
      VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)' scope=both;
  4. For real-time capture only, add the standby log file:

    ALTER SYSTEM set
      LOG_ARCHIVE_DEST_2='LOCATION=/opt/oracle/product/19c/dbhome_1/srl_dbs/
      VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)' scope=both; 

    There must be one more standby log file group than in the source (see Create the Standby Redo Log Files). Use SELECT COUNT(GROUP#) FROM GV$LOG; to verify this:

    ALTER DATABASE add standby logfile group 4 'slog4a.rdo' SIZE 200M;
    ALTER DATABASE add standby logfile group 5 'slog5a.rdo' SIZE 200M;
    ALTER DATABASE add standby logfile group 6 'slog6a.rdo' SIZE 200M;
    ALTER DATABASE add standby logfile group 7 'slog7a.rdo' SIZE 200M;
  5. Before running the setup script, create an Oracle user for use by OJet. In a CDB environment, this must be a common user. There is no need to assign privileges, those will be added by the setup script. Do not set IDLE_TIME on this user's session or the OJet application will halt.

  6. Run the setup script (see Running the OJet setup script on Oracle), appending a single y parameter. For example:

    setupOJet.sh 203.0.113.49:1521:orcl sys ******** OJET_USER y

Oracle Reader properties

Before you can use this adapter, Oracle must be configured as described in the parts of Configuring Oracle to use Oracle Reader that are relevant to your environment.

Note

If Oracle Reader will be deployed to a Forwarding Agent, install the required JDBC driver as described in Install the Oracle JDBC driver in a Forwarding Agent.

Before deploying an Oracle Reader application, see Runtime considerations when using Oracle Reader.

Striim provides templates for creating applications that read from Oracle and write to various targets. See Creating an application using a template for details.

The adapter properties are:

property

type

default value

notes

Bidirectional Marker Table

String

When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive.

CDDL Action

enum

Process

18c and earlier only: see Handling schema evolution.

CDDL Capture

Boolean

False

18c and earlier only: enables schema evolution (see Handling schema evolution). When set to True, Dictionary Mode must be set to Offline Catalog and Support PDB and CDB must be False.

Committed Transactions

Boolean

True

LogMiner only: by default, only committed transactions are read. Set to False to read both committed and uncommitted transactions.

Compression

Boolean

False

If set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See Oracle Reader example output for examples.

Set to True when Oracle Reader's output stream is the input stream of Cassandra Writer.

Connection Retry Policy

String

timeOut=30, retryInterval=30, maxRetries=3

With the default setting:

  • Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).

  • If the request times out, Striim will try again in 30 seconds (retryInterval=30).

  • If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.

Negative values are not supported.

Connection URL

String

<hostname>:<port>:<SID> or <hostname>:<port>/<service name>

If using Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.)

If using Amazon RDS for Oracle, the connection URL is <endpoint>:<port>:<DB name>. The required values are displayed at Instance Actions > see details.
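For example, hypothetical connection URLs (replace the host, port, SID, and service name with your own):

```
203.0.113.49:1521:orcl               using the SID
203.0.113.49:1521/orcl.example.com   using the service name
```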

Database Role

String

PRIMARY

Leave set to the default value of PRIMARY except when reading from a standby (see Reading from a standby).

Dictionary Mode

String

OnlineCatalog

Leave set to the default of OnlineCatalog except when CDDL Capture is True or you are reading from a standby (see Reading from a standby).

Excluded Tables

String

Data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names (separated by semicolons) and wildcards may be used exactly as for Tables.

Exclude Users

String

Optionally, specify one or more Oracle user names, separated by semicolons, whose transactions will be omitted from OracleReader output. Possible uses include:

  • omitting transactions that would cause an endless loop when data previously read by OracleReader is eventually written back to the same table by DatabaseWriter, for example, in the context of high-availability "active/active" replication

  • omitting transactions involving multiple gigabytes of data, thus reducing Striim's memory requirements

  • omitting long-running transactions, ensuring that OracleReader will restart from a recent SCN after Striim is restarted

External Dictionary File

String

Leave blank except when reading from a standby (see Reading from a standby).

Fetch Size

Integer

1000

LogMiner only: the number of records the JDBC driver will return at a time. For example, if Oracle Reader queries LogMiner and there are 2300 records available, the JDBC driver will return two batches of 1000 records and one batch of 300.

Filter Transaction Boundaries

Boolean

True

With the default value of True, BEGIN and COMMIT operations are filtered out. Set to False to include BEGIN and COMMIT operations.

Ignorable Exception

String

Do not change unless instructed to by Striim support.

JAAS Configuration

String

If you enable Kerberos authentication (see Enable Kerberos authentication for Oracle and PostgreSQL), use this property to configure it.

To authenticate Oracle using a Kerberos principal, specify java.security.krb5.conf=/<path>/krb5.conf; doNotPrompt=true; principal=oracleclient@<Kerberos realm>.

For example, java.security.krb5.conf=/etc/krb5.conf; doNotPrompt=true; principal=oracleclient@MYDOMAIN.COM

To authenticate Oracle using a Kerberos credential cache, specify java.security.krb5.conf=/<path>/krb5.conf; doNotPrompt=true; useTicketCache=true; ticketCache=<fully qualified credential cache file name>.

For example, java.security.krb5.conf=/etc/krb5.conf; doNotPrompt=true; useTicketCache=true; ticketCache=/home/Jdbc/Security/krbcache/krb5cc

Password

encrypted password

the password specified for the username (see Encrypted passwords)

Queue Size

Integer

2048

Quiesce Marker Table

String

QUIESCEMARKER

See Creating the QUIESCEMARKER table for Oracle Reader. Modify the default value if the quiesce marker table is not in the schema associated with the user specified in the Username. Three-part CDB / PDB names are not supported in this release.

Send Before Image

Boolean

True

set to False to omit before data from output

Set Conservative Range

Boolean

False

If reading from Oracle 19c, you have long-running transactions, and parallel DML mode is enabled (see Enable Parallel DML Mode), set this to True.
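To check whether parallel DML is enabled for your session, one option is to query v$session; this is a sketch using standard Oracle views (PDML_STATUS is ENABLED, DISABLED, or FORCED):

```sql
-- Show parallel DML status for the current session
SELECT pdml_status
FROM v$session
WHERE sid = SYS_CONTEXT('USERENV', 'SID');
```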

SSL Config

String

If using SSL with the Oracle JDBC driver, specify the required properties. Examples:

If using SSL for encryption only:

oracle.net.ssl_cipher_suites=
  (SSL_DH_anon_WITH_3DES_EDE_CBC_SHA,
    SSL_DH_anon_WITH_RC4_128_MD5, 
    SSL_DH_anon_WITH_DES_CBC_SHA)

If using SSL for encryption and server authentication:

javax.net.ssl.trustStore=
/etc/oracle/wallets/ewallet.p12;
javax.net.ssl.trustStoreType=PKCS12;
javax.net.ssl.trustStorePassword=********

If using SSL for encryption and both server and client authentication:

javax.net.ssl.trustStore=
/etc/oracle/wallets/ewallet.p12;
javax.net.ssl.trustStoreType=PKCS12;
javax.net.ssl.trustStorePassword=********;
javax.net.ssl.keyStore=/opt/Striim/certs;
javax.net.ssl.keyStoreType=JKS;
javax.net.ssl.keyStorePassword=********

Start SCN

String

Optionally specify an SCN from which to start reading (See Replicating Oracle data to another Oracle database for an example). Do not specify a start point prior to when supplemental logging was enabled.

If you are using schema evolution (see Handling schema evolution), set a Start SCN only if you are sure that there have been no DDL changes after that point.

See also Switching from initial load to continuous replication.
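To capture the current SCN to use as a starting point, you can run the same query shown in the OJet configuration steps:

```sql
SELECT CURRENT_SCN FROM V$DATABASE;
```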

Start Timestamp

String

null

With the default value of null, only new (based on current system time) transactions are read. Specify a timestamp to read transactions that began after that time. The format is DD-MON-YYYY HH:MI:SS. For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00. Do not specify a start point prior to when supplemental logging was enabled.

If you are using schema evolution (see Handling schema evolution), set a Start Timestamp only if you are sure that there have been no DDL changes after that point.

Support PDB and CDB

Boolean

False

Set to True if reading from CDB or PDB.

Tables

String

The table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use Oracle Reader) in the format <schema>.<table>. (If using Oracle 12c with PDB, use three-part names: <pdb>.<schema>.<table>.) Names are case-sensitive.

You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard. For example, HR.% would read all tables in the HR schema. You may not specify a wildcard for the schema (that is, %.% is not supported). The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.

Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate.

Table and column identifiers (names) may not exceed 30 bytes when using one-byte character sets. When using two-byte character sets, the limit is 15 characters.

Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim.
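To check your database character set before configuring this property, you can query the NLS parameters (a standard Oracle data dictionary query):

```sql
SELECT value
FROM nls_database_parameters
WHERE parameter = 'NLS_CHARACTERSET';
```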

See also Specifying key columns for tables without a primary key.

Transaction Buffer Disk Location

String

striim/LargeBuffer

See Transaction Buffer Type.

Transaction Buffer Spillover Size

String

100MB

When Transaction Buffer Type is Disk, the amount of memory that Striim will use to hold each in-process transaction before buffering it to disk. You may specify the size in MB or GB.

When Transaction Buffer Type is Memory, this setting has no effect.

Transaction Buffer Type

String

Disk

If Striim runs out of available Java heap space, the application will terminate. With Oracle Reader, this typically happens when a transaction includes millions of INSERT, UPDATE, or DELETE events with a single COMMIT; the application terminates with an error message such as "increase the block size of large buffer" or "exceeded heap usage threshold."

To avoid this problem, with the default setting of Disk, when a transaction exceeds the Transaction Buffer Spillover Size, Striim will buffer it to disk at the location specified by the Transaction Buffer Disk Location property, then process it when memory is available.

When the setting is Disk and recovery is enabled (see Recovering applications), after the application halts, terminates, or is stopped, the buffer will be reset, and during recovery any previously buffered transactions will restart from the beginning.

To disable transaction buffering, set Transaction Buffer Type to Memory.

Username

String

the username created as described in Configuring Oracle to use Oracle Reader; if using Oracle 12c or later with PDB, specify the CDB user (c##striim) 

Specifying key columns for tables without a primary key

If a primary key is not defined for a table, the values for all columns are included in UPDATE and DELETE records, which can significantly reduce performance. You can work around this by setting the Compression property to True and including the KeyColumns option in the Tables property value. The syntax is:

Tables:'<table name> KeyColumns(<COLUMN 1 NAME>,<COLUMN 2 NAME>,...)'

The column names must be uppercase. Specify as many columns as necessary to define a unique key for each row. The columns must be supported (see Oracle Reader and OJet data type support and correspondence) and specified as NOT NULL.

If the table has a primary key, or the Compression property is set to False, KeyColumns will be ignored.
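For example, assuming a hypothetical table HR.EMP_HISTORY that has no primary key but whose rows are uniquely identified by the combination of its EMP_ID and EFFECTIVE_DATE columns (both NOT NULL), the source might be defined as follows (all names and credentials here are illustrative only):

```
CREATE SOURCE OracleNoPKIn USING OracleReader (
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Compression:true,
  Tables:'HR.EMP_HISTORY KeyColumns(EMP_ID,EFFECTIVE_DATE)'
)
OUTPUT TO OracleNoPKStream;
```

Because Compression is set to True, UPDATE records for this table will include values only for the key columns and the modified columns, and DELETE records will include only the key columns.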

OJet properties

Before you can use this adapter, Oracle must be configured as described in Configuring Oracle to use OJet.

Note

Before deploying OJet on a Striim server, install the Oracle Instant Client as described in Install the Oracle Instant Client in a Striim server.

Before deploying OJet on a Forwarding Agent, install the Oracle Instant Client as described in Install the Oracle Instant Client in a Forwarding Agent.

Before deploying an OJet application, see Runtime considerations when using OJet.

Striim provides templates for creating applications that read from Oracle and write to various targets. See Creating an application using a template for details.

The adapter properties are:

property

type

default value

notes

Bidirectional Marker Table

String

When performing bidirectional replication, the fully qualified name of the marker table (see Bidirectional replication). This setting is case-sensitive. This property appears only if your Striim cluster has been licensed for bidirectional support.

CDDL Action

enum

Process

See Handling schema evolution.

CDDL Capture

Boolean

False

See Handling schema evolution.

Compression

Boolean

False

If set to True, update operations for tables that have primary keys include only the primary key and modified columns, and delete operations include only the primary key. With the default value of False, all columns are included. See Oracle Reader example output for examples.

Set to True when OJet's output stream is the input stream of Cassandra Writer.

Connection Retry Policy

String

timeOut=30, retryInterval=30, maxRetries=3

With the default setting:

  • Striim will wait for the database to respond to a connection request for 30 seconds (timeOut=30).

  • If the request times out, Striim will try again in 30 seconds (retryInterval=30).

  • If the request times out on the third retry (maxRetries=3), a ConnectionException will be logged and the application will stop.

Negative values are not supported.
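For example, to allow a 60-second connection timeout with up to five retries at 45-second intervals, the property could be set as follows (shown in TQL syntax; the values are illustrative):

```
ConnectionRetryPolicy:'timeOut=60, retryInterval=45, maxRetries=5'
```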

Connection URL

String

jdbc:oracle:oci:@<hostname>:<port>:<SID> or jdbc:oracle:oci:@<hostname>:<port>/<service name>

If using Oracle 12c or later with PDB, use the SID for the CDB service. (Note that with DatabaseReader and DatabaseWriter, you must use the SID for the PDB service instead.)

If Downstream Capture is enabled, specify the connection URL for the downstream database. Otherwise, specify the connection URL for the primary database.

If the specified connection URL is invalid, deployment will fail with an "ORA-12170: TNS:Connect timeout occurred" error. Note that this error will also occur if Striim is unable to connect to Oracle for any other reason, such as a network outage or the database being offline.

Downstream Capture

Boolean

False

If set to True, downstream capture is enabled.

Downstream Capture Mode

String

None

REAL_TIME: real time downstream capture mode.

ARCHIVED_LOG: archived log downstream capture mode.

Excluded Tables

String

Data for any tables specified here will not be returned. For example, if Tables uses a wildcard, data from any tables specified here will be omitted. Multiple table names (separated by semicolons) and wildcards may be used exactly as for Tables.
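For example, to read all tables in a schema except two hypothetical audit tables, the Tables and Excluded Tables properties (shown here in TQL form) could be combined like this:

```
Tables:'HR.%',
ExcludedTables:'HR.AUDIT_LOG;HR.AUDIT_ARCHIVE'
```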

Filter Transaction Boundaries

Boolean

True

OJet Config

String

null

A JSON string that specifies the configuration of the OJet reader components. All configuration values are disabled by default. It uses the following format:

{
    "<component name>" : [
        "<parameter name>:<value>",
        ...
    ]
}

The components are OJET and CAPTURE. They have the following configuration parameters:

  • OJET: queuesize: the maximum number of events held in the in-memory queue

  • OJET: open_txn_delay_time: By default, this value is 0, which means that OJet will ignore any open transactions. If you prefer the application to halt when there are open transactions, set this to a positive value in milliseconds. If OJet detects open transactions, it will wait that number of milliseconds, then check again. After three retries, if there are still open transactions, the application will halt.

    To list open transactions, while OJet is running, in the Striim console enter SHOW <namespace> <OJet adapter name> OPENTRANSACTIONS.

  • CAPTURE: fetch_lcr_attributes: The default value is False. Set to True to include the additional attributes (TxnID, Thread#, Username, ROWID, and TransactionName) in WAEvent.

For example:

{
   "OJET":[
      "queuesize:20000",
      "open_txn_delay_time:60000"
   ],
   "CAPTURE":[
      "fetch_lcr_attributes:true"
   ]
}

Password

encrypted password

The password for the Oracle user specified in Username.

Primary Database Connection URL

String

If Downstream Capture is enabled, specify the connection URL for the database that is shipping the redo logs to the downstream database. In an Active Data Guard environment, this may be the primary database or a standby.

Primary Database Password

encrypted password

If Downstream Capture is enabled, specify the password for the user specified in Primary Database Username.

Primary Database Username

String

If Downstream Capture is enabled, specify the Oracle user you created as described in Configuring Active Data Guard to use OJet.

Send Before Image

Boolean

True

Set to False to omit before data from the output.

SSL Config

String

If using SSL with the Oracle JDBC driver, specify the required properties using the syntax oracle.net.ssl_server_cert_dn=<server certificate domain name>;oracle.net.wallet_location=<path>. The wallet location must be accessible by Striim.

Start SCN

String

Optionally specify an SCN from which to start reading (See Replicating Oracle data to another Oracle database for an example). Do not specify a start point prior to when supplemental logging was enabled.

When you set a Start SCN, before running the application, trigger a dictionary build and retrieve the current SCN by running these commands:

EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=>
  DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
SELECT CURRENT_SCN FROM V$DATABASE;

See also Switching from initial load to continuous replication.

Start Timestamp

String

null

With the default value of null, only new (based on current system time) transactions are read. If a timestamp is specified, transactions that began after that time are also read. The format is DD-MON-YYYY HH:MI:SS. For example, to start at 5:00 pm on July 15, 2017, specify 15-JUL-2017 17:00:00. Do not specify a start point prior to when supplemental logging was enabled.

Tables

String

The table or materialized view to be read (supplemental logging must be enabled as described in Configuring Oracle to use OJet) in the format <schema>.<table>. (If using Oracle 12c or later with PDB, use three-part names: <pdb>.<schema>.<table>.) Names are case-sensitive.

You may specify multiple tables and materialized views as a list separated by semicolons or with the % wildcard. For example, HR.% would read all tables in the HR schema. You may not specify a wildcard for the schema (that is, %.% is not supported). The % wildcard is allowed only at the end of the string. For example, mydb.prefix% is valid, but mydb.%suffix is not.

Unused columns are supported. Values in virtual columns will be set to null. If a table contains an invisible column, the application will terminate.

When reading from Oracle 11g or 12c Release 1 version 12.1, table and column identifiers (names) may not exceed 30 bytes. When using one-byte character sets, the limit is 30 characters. When using two-byte character sets, the limit is 15 characters.

When reading from Oracle 12c Release 2 version 12.2 or later, table and column identifiers (names) may not exceed 128 bytes. When using one-byte character sets, the limit is 128 characters. When using two-byte character sets, the limit is 64 characters.

Oracle character set AL32UTF8 (UTF-8) and character sets that are subsets of UTF-8, such as US7ASCII, are supported. Other character sets may work so long as their characters can be converted to UTF-8 by Striim.

See also Specifying key columns for tables without a primary key.

Transaction Age Spillover Limit

Integer

1000

OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the amount of time that any message in the transaction has been in memory exceeds the specified number of seconds.

Transaction Buffer Spillover Count

Integer

10000

OJet begins to spill messages from the Oracle server's memory to its hard disk for a particular transaction when the number of messages in memory for the transaction exceeds the specified number.

Username

String

The name of the OJet user created as described in Running the OJet setup script on Oracle or Configuring Active Data Guard to use OJet. If using Oracle 12c or later with PDB, specify the CDB user (c##striim).

Oracle Reader and OJet WAEvent fields

The output data type for both Oracle Reader and OJet is WAEvent.

metadata: for DML operations, the most commonly used elements are:

  • DatabaseName (OJet only): the name of the database

  • OperationName: COMMIT, BEGIN, INSERT, DELETE, UPDATE, or (when using Oracle Reader only) ROLLBACK

  • TxnID: transaction ID

  • TimeStamp: timestamp from the CDC log

  • TableName (returned only for INSERT, DELETE, and UPDATE operations): fully qualified name of the table

  • ROWID (returned only for INSERT, DELETE, and UPDATE operations): the Oracle ID for the inserted, deleted, or updated row

To retrieve the values for these elements, use the META function. See Parsing the fields of WAEvent for CDC readers.

data: for DML operations, an array of fields, numbered from 0, containing:

  • for an INSERT or DELETE operation, the values that were inserted or deleted

  • for an UPDATE, the values after the operation was completed

To retrieve the values for these fields, use SELECT ... (DATA[]). See Parsing the fields of WAEvent for CDC readers.

before (for UPDATE operations only): the same format as data, but containing the values as they were prior to the UPDATE operation

dataPresenceBitMap, beforePresenceBitMap, and typeUUID are reserved and should be ignored.
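As a sketch, a CQ that pulls a few metadata elements and data fields out of the WAEvent stream might look like the following (the stream names, aliases, and field positions are hypothetical):

```
CREATE CQ ParseOracleCDC
INSERT INTO ParsedStream
SELECT META(x,'TableName') AS tname,
       META(x,'OperationName') AS op,
       data[0] AS field0,
       data[1] AS field1
FROM OracleCDCStream x;
```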

The following is a complete list of fields that may appear in metadata. The actual fields will vary depending on the operation type and other factors.

metadata property

present when using Oracle Reader

present when using OJet

comments

AuditSessionID

Audit session ID associated with the user session making the change

BytesProcessed

COMMIT_TIMESTAMP

the UNIX epoch time the transaction was committed, based on the Striim server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long

COMMITSCN

x

system change number (SCN) when the transaction committed

CURRENTSCN

system change number (SCN) of the operation

DBCommitTimestamp

the UNIX epoch time the transaction was committed, based on the Oracle server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long

DBTimestamp

the UNIX epoch time of the operation, based on the Oracle server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long

OperationName

user-level SQL operation that made the change (INSERT, UPDATE, etc.)

OperationType

the Oracle operation type

  • for OJet: DDL or DML

  • for Oracle Reader: COMMIT, DDL, DELETE, INSERT, INTERNAL, LOB_ERASE, LOB_TRIM, LOB_WRITE, MISSING_SCN, ROLLBACK, SELECT_FOR_UPDATE, SELECT_LOB_LOCATOR, START, UNSUPPORTED, or UPDATE

ParentTxnID

raw representation of the parent transaction identifier

PK_UPDATE

true if an UPDATE operation changed the primary key, otherwise false

RbaBlk

RBA block number within the log file

RbaSqn

sequence# associated with the Redo Block Address (RBA) of the redo record associated with the change

RecordSetID

Uniquely identifies the redo record that generated the row. The tuple (RecordSetID, SSN) together uniquely identifies a logical row change.

RollBack

1 if the record was generated because of a partial or a full rollback of the associated transaction, otherwise 0

ROWID

see comment

Row ID of the row modified by the change (only meaningful if the change pertains to a DML). This will be NULL if the redo record is not associated with a DML.

OJet: will be included only if fetch_lcr_attributes is specified in OJet Config

SCN

system change number (SCN) when the database change was made

SegmentName

name of the modified data segment

SegmentType

type of the modified data segment (INDEX, TABLE, ...)

Serial

serial number of the session that made the change

Serial#

see comment

serial number of the session that made the change; will be included only if fetch_lcr_attributes is specified in OJet Config

Session

session number of the session that made the change

Session#

see comment

session number of the session that made the change; will be included only if fetch_lcr_attributes is specified in OJet Config

SessionInfo

Information about the database session that executed the transaction. Contains process information, machine name from which the user logged in, client info, and so on.

SQLRedoLength

length of reconstructed SQL statement that is equivalent to the original SQL statement that made the change

SSN

SQL sequence number. The tuple (RecordSetID, SSN) together uniquely identifies a logical row change.

TableName

name of the modified table (in case the redo pertains to a table modification)

TableSpace

name of the tablespace containing the modified data segment.

ThreadID

ID of the thread that made the change to the database

Thread#

see comment

ID of the thread that made the change to the database; will be included only if fetch_lcr_attributes is specified in OJet Config

TimeStamp

the UNIX epoch time of the operation, based on the Striim server's time zone: Oracle Reader returns this as org.joda.time.DateTime, OJet returns it as java.lang.Long

TransactionName

name of the transaction that made the change (only meaningful if the transaction is a named transaction)

TxnID

raw representation of the transaction identifier

TxnUserID

UserName

name of the user associated with the operation

OracleReader simple application

The following application will write change data for all tables in myschema to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with LogMiner (see Configuring Oracle LogMiner) and myschema with the name of the schema containing the tables to be read.

CREATE APPLICATION OracleLMTest;
CREATE SOURCE OracleCDCIn USING OracleReader (
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Tables:'myschema.%',
  FetchSize:1
) 
OUTPUT TO OracleCDCStream;

CREATE TARGET OracleCDCOut
  USING SysOut(name:OracleCDCLM)
  INPUT FROM OracleCDCStream;
END APPLICATION OracleLMTest;

Alternatively, you may specify a single table, such as myschema.mytable. See the discussion of Tables in Oracle Reader properties for additional examples of using wildcards to select a set of tables.

When troubleshooting problems, you can get the current LogMiner SCN and timestamp by entering mon <namespace>.<OracleReader source name>; in the Striim console.

Oracle Reader example output

OracleReader's output type is WAEvent. See WAEvent contents for change data for general information.

The following are examples of WAEvents emitted by OracleReader for various operation types. Note that many of the metadata values (see Oracle Reader and OJet WAEvent fields) are dependent on the Oracle environment and thus will vary from the examples below.

The examples all use the following table:

CREATE TABLE POSAUTHORIZATIONS (
  BUSINESS_NAME varchar2(30),
  MERCHANT_ID varchar2(100),
  PRIMARY_ACCOUNT NUMBER,
  POS NUMBER,CODE varchar2(20),
  EXP char(4),
  CURRENCY_CODE char(3),
  AUTH_AMOUNT number(10,3),
  TERMINAL_ID NUMBER,
  ZIP number,
  CITY varchar2(20),
  PRIMARY KEY (MERCHANT_ID));
COMMIT;

INSERT

If you performed the following INSERT on the table:

INSERT INTO POSAUTHORIZATIONS VALUES(
  'COMPANY 1',
  'D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu',
  6705362103919221351,
  0,
  '20130309113025',
  '0916',
  'USD',
  2.20,
  5150279519809946,
  41363,
  'Quicksand');
COMMIT;

Using LogMiner, the WAEvent for that INSERT would be similar to:

data: ["COMPANY 1","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu","6705362103919221351","0","20130309113025",
"0916","USD","2.2","5150279519809946","41363","Quicksand"]
metadata: {"RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726174",
"SQLRedoLength":"325","BytesProcessed":"782","ParentTxnID":"8.16.463","SessionInfo":"UNKNOWN",
"RecordSetID":" 0x000015.00000310.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726175,
"SEQUENCE":"1","Rollback":"0","STARTSCN":"726174","SegmentName":"POSAUTHORIZATIONS",
"OperationName":"INSERT","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"784",
"SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"8.16.463","Serial":"201",
"ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB",
"DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72617400000059109745623040160001",
"Session":"105"}
before: null

UPDATE

If you performed the following UPDATE on the table:

UPDATE POSAUTHORIZATIONS SET BUSINESS_NAME = 'COMPANY 5A' where pos=0;
COMMIT;

Using LogMiner with the default setting Compression: false, the WAEvent for that UPDATE for the row created by the INSERT above would be similar to:

data: ["COMPANY 5A","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,
  null,null,null,null,null,null]
metadata: {"RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726177","
SQLRedoLength":"164","BytesProcessed":"729","ParentTxnID":"2.5.451","SessionInfo":"UNKNOWN",
"RecordSetID":" 0x000015.00000313.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726178,
"SEQUENCE":"1","Rollback":"0","STARTSCN":"726177","SegmentName":"POSAUTHORIZATIONS",
"OperationName":"UPDATE","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"787",
"SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"2.5.451","Serial":"201",
"ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB",
"DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72617700000059109745625006240000",
"Session":"105"}
before: ["COMPANY 1","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,null,null,null,null,
null,null]

Note that when using LogMiner the before section contains a value only for the modified column. You may use the IS_PRESENT() function to check whether a particular field value has a value (see Parsing the fields of WAEvent for CDC readers).
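As a sketch, IS_PRESENT could be used in a CQ to distinguish a column that was genuinely set to null from one that was simply not included in the before image (the stream names and field position here are hypothetical):

```
CREATE CQ CheckBeforeImage
INSERT INTO BeforeCheckStream
SELECT CASE
         WHEN IS_PRESENT(x,before,0) THEN before[0]
         ELSE 'not captured'
       END
FROM OracleCDCStream x;
```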

With Compression: true, only the primary key is included in the before array:

before: [null,"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,
  null,null,null,null,null,null]

In all cases, if OracleReader's SendBeforeImage property is set to False, the before value will be null.

DELETE

If you performed the following DELETE on the table:

DELETE from POSAUTHORIZATIONS where pos=0;
COMMIT;

Using LogMiner with the default setting Compression: false, the WAEvent for a DELETE for the row affected by the UPDATE above would be:

data: ["COMPANY 5A","D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu","6705362103919221351","0","20130309113025",
"0916","USD","2.2","5150279519809946","41363","Quicksand"]
metadata: {"RbaSqn":"21","AuditSessionId":"4294967295","TableSpace":"USERS","CURRENTSCN":"726180",
"SQLRedoLength":"384","BytesProcessed":"803","ParentTxnID":"3.29.501","SessionInfo":"UNKNOWN",
"RecordSetID":" 0x000015.00000315.0010 ","DBCommitTimestamp":"1553126439000","COMMITSCN":726181,
"SEQUENCE":"1","Rollback":"0","STARTSCN":"726180","SegmentName":"POSAUTHORIZATIONS",
"OperationName":"DELETE","TimeStamp":1553151639000,"TxnUserID":"SYS","RbaBlk":"789",
"SegmentType":"TABLE","TableName":"SCOTT.POSAUTHORIZATIONS","TxnID":"3.29.501","Serial":"201",
"ThreadID":"1","COMMIT_TIMESTAMP":1553151639000,"OperationType":"DML","ROWID":"AAAE9mAAEAAAAHrAAB",
"DBTimeStamp":"1553126439000","TransactionName":"","SCN":"72618000000059109745626316960000",
"Session":"105"}
before: null

With Compression: true, the data array would be:

data: [null,"D6RJPwyuLXoLqQRQcOcouJ26KGxJSf6hgbu",null,null,null,null,null,null,null,null,null]

Note that the contents of data and before are reversed from what you might expect for a DELETE operation. This simplifies programming since you can get data for INSERT, UPDATE, and DELETE operations using only the data field.

OJet simple application

Before deploying an OJet application, note the prerequisites discussed in Runtime considerations when using OJet.

The following application will write change data for all tables in myschema to SysOut. Replace the Username and Password values with the credentials for the account you created for Striim for use with OJet (see Configuring Oracle to use OJet) and myschema with the name of the schema containing the tables to be read.

CREATE APPLICATION OJetTest;
CREATE SOURCE OracleCDCIn USING Ojet (
  Username:'striim',
  Password:'passwd',
  ConnectionURL:'203.0.113.49:1521:orcl',
  Tables:'myschema.%'
) 
OUTPUT TO OracleCDCStream;

CREATE TARGET OracleCDCOut
  USING SysOut(name:OracleCDCLM)
  INPUT FROM OracleCDCStream;
END APPLICATION OJetTest;

Alternatively, you may specify a single table, such as myschema.mytable. See the discussion of Tables in OJet properties for additional examples of using wildcards to select a set of tables.

Oracle Reader and OJet data type support and correspondence

Oracle type

TQL type when using Oracle Reader

TQL type when using OJet

ADT

not supported, values will be set to null

not supported, application will halt if it reads a table containing a column of this type

BFILE

not supported, values will be set to null

values for a column of this type will contain the file names, not their contents

BINARY_DOUBLE

Double

Double

BINARY_FLOAT

Float

Float

BLOB

String (a primary or unique key must exist on the table)

An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value.

When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. If you encounter either of these issues, Contact Striim support for assistance.

Byte[]

CHAR

String

String

CLOB

String (a primary or unique key must exist on the table)

An insert or update containing a column of this type generates two CDC log entries: an insert or update in which the value for this column is null, followed by an update including the value.

When reading from Oracle 19c, values for this type may be incorrect when (1) a table contains multiple columns of this type and operations are performed on more than one of those columns in the same transaction or (2) multiple tables containing columns of this type are being read and different user sessions are performing operations on them. If you encounter either of these issues, Contact Striim support for assistance.

String

DATE

DateTime

java.time.LocalDateTime

FLOAT

String

String

INTERVALDAYTOSECOND

String (always has a sign)

String (unsigned)

INTERVALYEARTOMONTH

String (always has a sign)

String (unsigned)

JSON

not supported, values will be set to null

not supported, application will halt if it reads a table containing a column of this type

LONG

Results may be inconsistent. Oracle recommends using CLOB instead.

String

LONG RAW

Results may be inconsistent. Oracle recommends using CLOB instead.

Byte[]

NCHAR

String

String

NCLOB

String (a primary or unique key must exist on the table)

String (a primary or unique key must exist on the table)

NESTED TABLE

not supported, application will halt if it reads a table containing a column of this type

not supported, application will halt if it reads a table containing a column of this type

NUMBER

String

String

NVARCHAR2

String

String

RAW

String

Byte[]

REF

not supported, application will halt if it reads a table containing a column of this type

not supported, application will halt if it reads a table containing a column of this type

ROWID

String

values for a column of this type will be set to null

SDO_GEOMETRY

SDO_GEOMETRY values will be set to null

Known issue DEV-20726: if a table contains a column of this type, the application will terminate

TIMESTAMP

DateTime

java.time.LocalDateTime

TIMESTAMP WITH LOCAL TIME ZONE

DateTime

java.time.LocalDateTime

TIMESTAMP WITH TIME ZONE

DateTime

java.time.ZonedDateTime

UDT

not supported, values will be set to null

not supported, application will halt if it reads a table containing a column of this type

UROWID

not supported, a table containing a column of this type will not be read

not supported due to Oracle bug 33147962, application will terminate if it reads a table containing a column of this type

VARCHAR2

String

String

VARRAY

Supported by LogMiner only in Oracle 12c and later. Required Oracle Reader settings:

  • Committed Transactions: True

  • Dictionary Mode: OnlineCatalog

  • Undo Retention: Set to an interval long enough that VARRAY values will be available when Oracle Reader attempts to read them. If the interval is too short and the data is no longer in the log, Oracle Reader will terminate with java.sql.SQLException "ORA-30052: invalid lower limit snapshot expression."

Limitations:

  • Tables containing VARRAY values must have primary keys.

  • The VARRAY must contain only elements that can be returned as Java primitive types.

  • The VARRAY's type name must be unique to its schema. If the same VARRAY type name is used in another schema, Oracle Reader will terminate with java.sql.SQLException "ORA-01427: single-row subquery returns more than one row."

  • Oracle's UNDO_RETENTION policy must be set to an interval long enough that VARRAY values will be available when Oracle Reader attempts to retrieve them with a Flashback (SELECT AS OF) query. If the interval is too short and the data is no longer available, Oracle Reader will terminate with java.sql.SQLException "ORA-30052: invalid lower limit snapshot expression." For more information, see the documentation for UNDO_RETENTION for your version of Oracle.

When the output of an Oracle Reader source is the input of a target using XML Formatter, the formatter's Format Column Value As property must be set to xmlelement for VARRAY data to be formatted correctly.

Known issue DEV-29799: if a table contains a column of this type, the application will terminate

XMLTYPE

Supported only for Oracle 12c and later. When DictionaryMode is OnlineCatalog, values in any XMLType columns will be set to null. When DictionaryMode is OfflineCatalog, reading from tables containing XMLType columns is not supported.

String

Handling long-running open transactions in Oracle

Typically, database transactions complete quickly, within a few seconds. Occasionally, however, a transaction may remain open for minutes, hours, or even days before it completes. Potential causes of long-running open transactions include:

  • A user started a transaction but forgot to commit it or roll it back.

  • A SQL client application opens a transaction automatically when connecting to the database and keeps it open until it disconnects.

  • Some batch jobs may last for hours. Ideally such jobs should be run only during maintenance windows.

Long-running open transactions can have several impacts on Striim:

  • When long-running open transactions contain many operations (typically batch jobs), CPU, memory, and disk usage may increase, resulting in increased lag in writing to targets. In the worst case, Striim may run out of memory or disk space and halt.

  • Recovery (see Recovering applications) from a crash or planned downtime may take longer as Striim will restart from the point where the oldest open transaction was started. In the worst case, the redo / archive log files may no longer be available and recovery will fail.

  • Quiesce may fail because the transaction cannot be flushed within the 30-second timeout.

Therefore, you must monitor your applications for signs of long-running open transactions and take prompt steps to address them to minimize impact to your Striim environment. Where possible, Striim strongly recommends that you modify your upstream Oracle applications and configurations to avoid long-running open transactions.

Monitoring long-running open transactions

Your applications may be impacted by long-running open transactions if you see signs such as low disk space or memory for the Striim server, the recovery checkpoint not advancing, recovery taking a long time, or inability to quiesce an application.

To determine whether you indeed have long-running open transactions, use the MON <Oracle Reader name> command. The output will include an entry similar to the following:

Oldest Open Transactions            │ 
  [{"5.21.21991":{"# of 
    Ops":1,"CommitSCN":"null","Sequence #":"1","StartSCN":"60601569",
   "Rba block #":"3847","Thread #":"1","TimeStamp":"2023-03-02T00:47:10.000+05:30"}}]    

This tells us that the transaction with StartSCN 60601569 has been open since March 2, 2023. "# of Ops":1 suggests that this transaction is open because a user did not commit it or roll it back, or because a SQL client application is behaving inappropriately. An open transaction that might cause out-of-memory or disk errors would typically have a large number of operations.

Alternatively, use the SHOW command (see Viewing open transactions).

Best practices for managing long-running open transactions

Ideally you should avoid long-running open transactions by not running queries that create them. This may involve breaking large batch jobs up into multiple smaller jobs that complete in less time. You should also run batch jobs during maintenance windows or at times when the database is not at its busiest.

You should also educate users with permission to connect to Oracle not to leave transactions open without committing them or rolling them back, and not to use client applications in such a way that they leave transactions open.

Handling long-running open transactions in Striim applications that use Oracle Reader

When there are long-running open transactions with many operations, Oracle Reader's transaction buffer helps avoid out-of-memory errors by buffering transactions to disk on the Striim server. By default, Oracle Reader is configured so that when a transaction's memory usage exceeds 100 MB (Transaction Buffer Spillover Size), it is buffered to disk (Transaction Buffer Type) on the Striim server, and processing is resumed when memory is available. When recovery is enabled, after the application halts, terminates, or is stopped the buffer will be reset, and during recovery any previously buffered transactions will restart from the beginning.

When using Oracle Reader, you can discard a long-running open transaction with Striim's DISCARD TRANSACTION command. This clears the specified open transaction from the transaction buffer, sends the buffered operations to downstream components such as CQs and writers, and allows the recovery checkpoint to advance. Any further operations that are part of the discarded transaction are ignored. We strongly recommend that you Contact Striim support before using DISCARD TRANSACTION, since it may result in data loss.

Handling long-running open transactions in Striim applications that use OJet

When there are long-running open transactions with many operations, OJet's transaction buffer helps avoid out-of-memory errors by buffering transactions to disk on the Oracle server. By default, OJet is configured so that when any statement in a transaction has been in memory for 1000 seconds (Transaction Age Spillover Limit), or the transaction's total number of operations exceeds 10,000 (Transaction Buffer Spillover Count), the transaction is buffered to disk on the Oracle server, and processing resumes when memory is available. When recovery is enabled, after the application halts, terminates, or is stopped, recovery begins from the oldest open transaction. If you need assistance in dealing with long-running open transactions, Contact Striim support.
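Unlike Oracle Reader's size-based trigger, OJet spills on either of two conditions: statement age or operation count. The defaults below come from the properties named above; the function itself is an illustrative sketch, not part of any Striim API.

```python
import time

# Defaults per the OJet properties described above.
TRANSACTION_AGE_SPILLOVER_LIMIT = 1000       # seconds
TRANSACTION_BUFFER_SPILLOVER_COUNT = 10_000  # operations

def should_spill(statement_times, op_count, now=None):
    """Return True when either OJet spill condition is met:
    any statement has been in memory longer than the age limit,
    or the transaction's total operation count exceeds the cap."""
    now = time.time() if now is None else now
    too_old = any(now - t > TRANSACTION_AGE_SPILLOVER_LIMIT
                  for t in statement_times)
    too_many = op_count > TRANSACTION_BUFFER_SPILLOVER_COUNT
    return too_old or too_many
```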

Runtime considerations when using Oracle Reader

  • Starting an Oracle Reader source automatically opens an Oracle session for the user specified in the Username property.

    • The session is closed when the source is stopped.

    • If a running Oracle Reader source fails with an error, the session will be closed.

  • Closing a PDB source while Oracle Reader is running will cause the application to terminate.

  • When Dictionary Mode is set to Offline Catalog, you should run the following command every six hours, or every three hours if you expect many large transactions:

    EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
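Because this dictionary build must recur on a schedule, you may want to automate it. The sketch below assumes the python-oracledb driver and placeholder credentials, and wraps the SQL*Plus EXECUTE shorthand in an anonymous PL/SQL block, as a driver requires; only the six-hour/three-hour cadence comes from the guidance above.

```python
import time

try:
    import oracledb  # assumed driver: pip install oracledb
except ImportError:
    oracledb = None  # sketch remains importable without the driver

# Anonymous PL/SQL block equivalent to the EXECUTE command above.
DICTIONARY_BUILD_SQL = (
    "BEGIN DBMS_LOGMNR_D.BUILD("
    "OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;"
)

def build_interval_seconds(expect_large_transactions):
    """Six hours normally, three hours when many large
    transactions are expected (per the guidance above)."""
    return (3 if expect_large_transactions else 6) * 3600

def run_dictionary_builds(dsn, user, password,
                          expect_large_transactions=False):
    """Run the dictionary build on a fixed schedule, forever."""
    interval = build_interval_seconds(expect_large_transactions)
    while True:
        with oracledb.connect(user=user, password=password,
                              dsn=dsn) as conn:
            with conn.cursor() as cur:
                cur.execute(DICTIONARY_BUILD_SQL)
            conn.commit()
        time.sleep(interval)
```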

Runtime considerations when using OJet

  • When reading from Oracle 11g, the name of an OJet reader must not exceed 18 characters. When reading from Oracle 12c or higher, the name must not exceed 118 characters.

  • Schema evolution does not support tables containing ROWID columns.

  • You must execute the following command before you create or deploy an OJet application. You should run the command again every six hours, or every three hours if you expect many large transactions.

    EXECUTE DBMS_LOGMNR_D.BUILD( OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
  • If there may be open transactions when you start an OJet application, run the following command to get the current SCN, and specify it as the Start Scn value in the application's OJet properties.

    SELECT CURRENT_SCN FROM V$DATABASE;
  • If reading from a downstream server, any time OJet stops, terminates, or halts, you must run the following query to verify that the Remote File Server (RFS) connection has been reestablished:

    SELECT THREAD#, SEQUENCE#, RESETLOG_ID FROM V$MANAGED_STANDBY WHERE process = 'RFS';

    That should return something similar to the following, indicating that the RFS connection is active:

    rfs (PID:18798): krsr_rfs_atc: Identified database type as 'PRIMARY': Client is ASYNC (PID:10829)
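The CURRENT_SCN query above can be wrapped in a small helper that returns the value to use for the Start Scn property. The helper is illustrative: it works with any DB-API-style connection whose cursors support context management (python-oracledb's do).

```python
CURRENT_SCN_SQL = "SELECT CURRENT_SCN FROM V$DATABASE"

def fetch_start_scn(conn):
    """Return the database's current SCN, suitable for the OJet
    Start Scn property when open transactions may exist at startup."""
    with conn.cursor() as cur:
        cur.execute(CURRENT_SCN_SQL)
        return int(cur.fetchone()[0])
```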
Using the SHOW command

Use the SHOW command to view OJet status or memory usage.

SHOW <OJet source name> MEMORY [ DETAILS ]
SHOW <OJet source name> STATUS [ DETAILS ]

The STATUS output includes:

  • APPLIED_SCN - All changes below this SCN have been applied

  • CAPTURE_TIME - Elapsed time (in hundredths of a second) scanning for changes in the redo log since the capture process was last started

  • CAPTURED_SCN - SCN of the last redo log record scanned

  • ENQUEUE_TIME - Time when the last message was enqueued

  • FILTERED_SCN - SCN of the low watermark transaction processed

  • FIRST_SCN - Lowest SCN from which the capture process can be repositioned

  • LCR_TIME - Elapsed time (in hundredths of a second) creating LCRs since the capture process was last started

  • MESSAGES_CAPTURED - Total number of redo entries passed by LogMiner to the capture process for rule evaluation since the capture process last started

  • MESSAGES_ENQUEUED - Total number of messages enqueued since the capture process was last started

  • OLDEST_SCN - Oldest SCN of the transactions currently being processed

  • REDO_MINED - The total amount of redo data mined (in bytes) since the capture process last started

  • REDO_WAIT_TIME - Elapsed time (in hundredths of a second) spent by the capture process in the WAITING FOR REDO state

  • RESTART_SCN - The SCN from which the capture process started mining redo data when it was last started

  • RULE_TIME - Elapsed time (in hundredths of a second) evaluating rules since the capture process was last started

  • START_SCN - SCN from which the capture process starts to capture changes

Viewing open transactions

SHOW <namespace>.<Oracle Reader or OJet source name> OPENTRANSACTIONS
  [ -LIMIT <count> ]
  [ -TRANSACTIONID '<transaction ID>,...']
  [ DUMP | -DUMP '<path>/<file name>' ];

This console command returns information about currently open Oracle transactions. The namespace may be omitted when the console is using the source's namespace.

With no optional parameters, SHOW <source> OPENTRANSACTIONS; will display summary information for up to ten open transactions (the default LIMIT count is 10). Output for OJet will not include Rba block or Thread #.

╒══════════════════╤════════════╤════════════╤══════════════════╤════════════╤════════════╤═══════════════════════════════════════╕
│ Transaction ID   │ # of Ops   │ Sequence # │ StartSCN         │ Rba block  │ Thread #   │ TimeStamp                             │
├──────────────────┼────────────┼────────────┼──────────────────┼────────────┼────────────┼───────────────────────────────────────┤
│ 3.5.222991       │ 5          │ 1          │ 588206203        │ 5189       │ 1          │ 2019-04-05T21:28:51.000-07:00         │
│ 5.26.224745      │ 1          │ 1          │ 588206395        │ 5189       │ 1          │ 2019-04-05T21:30:24.000-07:00         │
│ 8.20.223786      │ 16981      │ 1          │ 588213879        │ 5191       │ 1          │ 2019-04-05T21:31:17.000-07:00         │
└──────────────────┴────────────┴────────────┴──────────────────┴────────────┴────────────┴───────────────────────────────────────┘
  • To show all open transactions, add -LIMIT ALL.

  • Add -TRANSACTIONID with a comma-separated list of transaction IDs (for example, -TRANSACTIONID '3.4.222991, 5.26.224745') to return summary information about specific transactions in the console and write the details to OpenTransactions_<timestamp> in the current directory.

  • Add DUMP to show summary information in the console and write the details to OpenTransactions_<timestamp> in the current directory.

  • Add -DUMP '<path>/<file name>' to show summary information in the console and write the details to the specified file.
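The open-transaction JSON summary shown at the start of this section can be post-processed to flag transactions worth investigating. This sketch parses that structure; the function name and the one-hour/10,000-operation thresholds are illustrative, not Striim defaults.

```python
import json
from datetime import datetime, timedelta, timezone

def flag_long_running(raw_json, now,
                      max_age=timedelta(hours=1), max_ops=10_000):
    """Return the IDs of open transactions older than max_age or
    with more than max_ops operations, given the JSON structure
    produced by the open-transactions report."""
    flagged = []
    for entry in json.loads(raw_json):
        for txn_id, info in entry.items():
            started = datetime.fromisoformat(info["TimeStamp"])
            if now - started > max_age or info["# of Ops"] > max_ops:
                flagged.append(txn_id)
    return flagged
```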

File lineage in Oracle

See SHOW for instructions on using SHOW LINEAGE. The file lineage report is also available via the get/applications/applicationMetadata/olm endpoint in the Striim application management REST API (see Using the Striim Application Management REST API) and for deployed and running Oracle Readers in the Flow Designer.

By default, this feature is disabled. See Enabling file lineage.

For OracleReader, the file lineage data includes:

File Name

the archive log file name, for example, /oracle_archivelogs/1_4009_890952357.dbf (the NAME value taken from V$ARCHIVED_LOG based on log sequence number and thread number) or ONLINE REDO when OracleReader is reading from redo logs

Status

PROCESSING when OracleReader is reading the file, COMPLETED when it has finished

Directory Name

the directory in which the file is located

File Creation Time

the time Striim created the file lineage entry for the file

Number Of Events

the number of events OracleReader has read from the file; if the application has been restarted, OracleReader may not start reading from the first event, so the number will be less than the total number of events in the file

First Event Timestamp

FIRST_TIME value taken from V$ARCHIVED_LOG based on log sequence number and thread number

Last Event Timestamp

NEXT_TIME for the file taken from V$ARCHIVED_LOG based on log sequence number and thread number

Wrap Number

the number of times OracleReader resumed reading the file after the application was restarted

SequenceNumber

the unique sequence number from the file name, for example, for /oracle_archivelogs/1_4009_890952357.dbf it would be 4009

ThreadID

the thread number associated with the log sequence number and file name

FirstChangeNumber

FIRST_CHANGE# for the file taken from V$ARCHIVED_LOG based on log sequence number and thread number

LastChangeNumber

NEXT_CHANGE# for the file taken from V$ARCHIVED_LOG based on log sequence number and thread number

When OracleReader is reading from redo logs, file-related values are NOT APPLICABLE or N/A.
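The SequenceNumber and ThreadID fields above are derived from the archive log file name. A regex sketch of that extraction, assuming the common %t_%s_%r (thread, sequence, resetlogs ID) LOG_ARCHIVE_FORMAT seen in the /oracle_archivelogs/1_4009_890952357.dbf example; other formats would need a different pattern:

```python
import re
from pathlib import Path

# Matches names like 1_4009_890952357.dbf (%t_%s_%r.dbf).
ARCHIVE_LOG_NAME = re.compile(
    r"^(?P<thread>\d+)_(?P<sequence>\d+)_(?P<resetlogs>\d+)\.dbf$")

def parse_archive_log_name(path):
    """Extract thread, sequence, and resetlogs ID from an archive
    log path; return None for non-matching names (e.g. ONLINE REDO)."""
    match = ARCHIVE_LOG_NAME.match(Path(path).name)
    if match is None:
        return None
    return {k: int(v) for k, v in match.groupdict().items()}
```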

For more information, see LOG_ARCHIVE_FORMAT in Oracle's Database Reference.