Release notes

The following are the release notes for Striim Cloud 5.4.0.1.

Changes that may require modification of your TQL code, workflow, or environment

  • As part of upgrading to Striim 5.4, any Kafka Reader and Kafka Writer adapters in created or quiesced applications will be upgraded to the new versions of the adapters, and corresponding connection profiles will be created. However, if after upgrading you import TQL exported from Striim 5.2 or earlier, any Kafka Reader and Kafka Writer components will use the deprecated Kafka 2.1 versions of the adapters. To use the new versions of the adapters, edit the TQL before importing to use the new properties, and create a connection profile with the appropriate properties.
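
    A minimal sketch of what the edited TQL might look like, assuming the new adapter versions reference a connection profile through a connectionProfileName property (the profile, topic, target, and stream names here are hypothetical):

      CREATE TARGET KafkaOut USING KafkaWriter (
        connectionProfileName: 'MyKafkaProfile',
        Topic: 'orders'
      )
      FORMAT USING JSONFormatter ()
      INPUT FROM SourceStream;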

  • Starting with release 5.2.4, the connection profile endpoint type BigQuery Metastore Catalog has been renamed BigLake Metastore Catalog, to reflect the name change by Google. On upgrading, BigQuery Metastore Catalog connection profiles will automatically be converted to BigLake Metastore Catalog connection profiles.

  • Starting with release 5.2.0, the syntax for defining external sources using the CREATE EXTERNAL SOURCE statement has been updated to simplify configuration and support additional source types. For details and updated examples, see CREATE EXTERNAL SOURCE.

  • The AIData event data model for Sentinel was modified in 5.0.6, renaming two fields from 5.0.2. The number of unique sensitive data identifiers is now specified by numIdentifiers (formerly Identifiers in 5.0.2), and the number of occurrences of all sensitive data identifiers is now specified by numOccurrences (formerly Occurrences in 5.0.2).

  • Starting with release 5.2.4, Striim Cloud uses JDK 17.

    • When upgrading from Striim Cloud 4.x to Striim Cloud 5.2.4, you must install JDK 17 in the environment for any Forwarding Agent. See Upgrading Striim Cloud.

    • If you use open processors, you may need to revise and recompile them to be compatible with JDK 17. See Updating open processors for JDK 17.

  • Starting with Striim 4.2.0.22, the internal Kafka version is 3.6.2. When you upgrade from an earlier release, the internal Kafka instance will be upgraded. If you have created any Kafka property sets for the internal Kafka instance, you will need to edit their kafkaversion property accordingly.
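
    For example, a property set created for the internal Kafka instance would need its kafkaversion value updated along these lines (a sketch only; the property set name and broker address are hypothetical):

      CREATE PROPERTYSET InternalKafkaProps (
        bootstrap.brokers: 'localhost:9092',
        kafkaversion: '3.6.2'
      );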

  • Kafka Reader and Kafka Writer versions 0.8, 0.9, and 0.10 are no longer supported. When you upgrade from Striim 4.x to Striim 5.x, any applications using those versions will automatically be updated to use Kafka Reader 2.1 or Kafka Writer 2.1 (which are backward-compatible with the old versions).

  • As part of upgrading from Striim 4.x to Striim 5.x, a Snowflake Writer with Streaming Upload enabled and Authentication Type set to OAuth or Password will have that property value switched to Key Pair.

  • Starting with release 4.2.0, TRUNCATE commands are supported by schema evolution (see Handling schema evolution). If you do not want to delete events in the target (for example, because you are writing to a data warehouse in Append Only mode), precede the writer with a CQ with the select statement SELECT * FROM <input stream name> WHERE META(x, OperationName) != 'Truncate'; (replacing <input stream name> with the name of the writer's input stream). Note that there will be no record in the target that the affected events were deleted.
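
    For example, a minimal sketch of such a CQ (the CQ and stream names are hypothetical placeholders):

      CREATE CQ FilterTruncates
      INSERT INTO FilteredStream
      SELECT * FROM SourceStream x
      WHERE META(x, OperationName) != 'Truncate';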

  • OJet requires Oracle Instant Client version 21.6. See Install the Oracle Instant Client in a Forwarding Agent.

  • MongoDB Reader no longer supports MongoDB versions prior to 3.6.

  • MongoDB Reader reads from MongoDB change streams rather than the oplog. Applications created in releases prior to 4.2.0 will continue to read from the oplog after upgrading to 4.2.1. To switch to change streams:

    1. Export the application to a TQL file.

    2. Drop the application.

    3. Revise the TQL as necessary to support new features, for example, changing the Connection URL to read from multiple shards (see the sketch after these steps).

    4. Import the TQL to recreate the application.
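
    A sketch of a revised MongoDB Reader definition whose Connection URL reads from multiple shards (the source, stream, host, and collection names are hypothetical, and only the properties relevant here are shown):

      CREATE SOURCE MongoCDCSource USING MongoDBReader (
        ConnectionURL: 'mongodb://mongos1:27017,mongos2:27017',
        Collections: 'mydb.orders'
      )
      OUTPUT TO MongoStream;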

  • The default eventcount value in Databricks Writer's Upload Policy has been increased from 10000 to 100000. The Hostname property has been removed, since the host name can be retrieved from the connection URL. (See Databricks Writer properties.)
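
    A sketch of a writer definition that sets the policy explicitly rather than relying on the new default (the target and stream names are hypothetical, and only the properties relevant here are shown):

      CREATE TARGET DatabricksOut USING DatabricksWriter (
        connectionUrl: '<Databricks JDBC connection URL>',
        uploadpolicy: 'eventcount:100000,interval:60s'
      )
      INPUT FROM SourceStream;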

Known issues in 5.4.0.1

  • DEV-44356: "Test Connection" link not working in vaults

  • DEV-52582: Striim UI doesn't allow multiple HPNonStopSQLMPReader tables to be specified

  • DEV-52892: exported TQL with open processor cannot be imported

  • DEV-54085: Zendesk Reader - Initial Load application does not load archived tickets

  • DEV-54264: Databricks stage tables not being dropped

  • DEV-54307: Kafka Writer converts all source column names to uppercase

  • DEV-54322: OJet - "Failed to access the source database from downstream using db link"

  • DEV-54489: Applications stuck in inconsistent state — unable to undeploy or drop app

  • DEV-55038: BigQueryWriter taking more time to start

Customer-reported issues fixed in release 5.4.0.1

  • DEV-28135: Database Writer / SQL Server loops on Datatype conversion error instead of CRASH

  • DEV-39178: UI - Metadata Manager Page not responding when metadata repository is large

  • DEV-50964: Striim.server.log flooded with exception (*org.apache.hive.service.cli.HiveSQLException:Invalid SessionHandle: SessionHandle) related to DeltaLakeWriter

  • DEV-50968: CDC App Got Stuck In Stopping Status Even After Agent Is Shutdown

  • DEV-51904: MSJet - Failed with "java.lang.RuntimeException: Cannot create type xxx "

  • DEV-51936: PostgreSQL > PostgreSQL: timestamp with infinity value hit error: timestamp out of range: "292269055-12-02 23:00:00.0"

  • DEV-52293: service offline

  • DEV-52355: MySQL > MySQL datetime & timestamp values don't match and show a timezone difference

  • DEV-52413: MSJet process spawned from Agent running as service exceeds memory limits defined in agent.conf/wrapper.conf

  • DEV-52458: Metering thread can cause striim version 5.0.6 to be slow

  • DEV-52459: Update "analyze metadata" with additional details on the unhealthy components

  • DEV-52740: OJet application with large number of tables

  • DEV-52807: OJET - Unable to drop app, no suitable driver found for - with ssl and property variables

  • DEV-52808: OJET - Test connection is not working when property variable is used

  • DEV-52822: BigqueryWriter: "Query error: UPDATE/MERGE must match at most one source row for each target row at [1:407]", "reason": "invalidQuery" }

  • DEV-52864: SSO - Firstname and Lastname not populating from Entra

  • DEV-52896: SnowflakeWriter - Missed soft-delete record that converted from ChangeOperationToInsert

  • DEV-52951: SpannerBatchReader doesn't pause under back pressure and continues to read data causing memory increase

  • DEV-52989: Server log not detailing the document and collection where it fails

  • DEV-53018: Connection Profile for BigQuery not saving with Google Secrets Manager

  • DEV-53035: MSJet - Missing the timestamp datatype column

  • DEV-53067: SchemaRegistry SubjectNameMapping is replacing the user supplied character

  • DEV-53079: UI File upload doesn't allow files starting with . (dot) considering it as hidden files

  • DEV-53097: Striim Failing to Start Due to Invalid Index Name Starting with Underscore

  • DEV-53104: PostgreSQL > Oracle hit error for timestamp in BC

  • DEV-53357: UploadedFiles in UI displays files for a split second and it disappears

  • DEV-53364: JMXReader hits NPE

  • DEV-53387: InitialLoad: PostgreSQL > PostgreSQL bit datatype fails

  • DEV-53401: Databricks :App crash frequently with Error [TABLE_OR_VIEW_NOT_FOUND] - pre retry issue

  • DEV-53416: PostgreSQL Reader - Failed with error "Invalid format: "infinity""

  • DEV-53498: OJET - Downstream OJET failed with "ORA-01291: missing log file" when primary db is RAC

  • DEV-53529: Mysql to Snowflake App hung until restart

  • DEV-53530: Mysql - Mysql Date value data mismatch

  • DEV-53607: SSO configuration accepts base64 Certificate Based on File Extension (.pem) instead of validating Content

  • DEV-53634: DatabaseWriter (SQLServer): Performance Degradation on Wide Tables Due to Decimal and Smallint Columns

  • DEV-53761: Apps failed with "NOT ENOUGH SERVER"

  • DEV-53794: GG Trail reader failing with "com.webaction.Exception.GGExternalException: Unsupported TDR Version - { 19 }"

  • DEV-53812: Initial load app failed with "TABLE_OR_VIEW_ALREADY_EXISTS" - Connection not initialised on restart

  • DEV-53813: UI is behaving weirdly

  • DEV-53856: Metrics exposed by Striim do not include the node name & some of the metrics are missing

  • DEV-53876: DDLs on uninterested tables should not result in messages in logs and UI

  • DEV-53894: Databricks - Failed with [TABLE_OR_VIEW_NOT_FOUND] on Staging Table - Problem in Pre-Retry

  • DEV-53906: GUI Spins and Never Loads after the first login

  • DEV-54006: Missing Single Sign On Tab when the Hostname is a DNS Name in Striim UI

  • DEV-54089: Oracle CDC app did not halt/terminate when stuck with no log file found

  • DEV-54241: mysql terminated: java.lang.ArrayIndexOutOfBoundsException

  • DEV-54250: PIL :: Patch request for https://webaction.atlassian.net/browse/DEV-53401

  • DEV-54266: Salesforce CDC Reader: When the Start Time is not epoch, error message mentions it as fetch size incorrect

  • DEV-54272: java.lang.IllegalArgumentException: No enum constant com.webaction.runtime.meta.MetaInfo.StatusInfo.Status.TERMINATED

  • DEV-54288: Kafka Writer : ClusterAuthorizationException: Cluster authorization failed

  • DEV-54309: App gets stuck (no reads or writes) after loading 1.159 Million records into Fabric Data Warehouse. Source has 5 Million records

  • DEV-54370: App got stuck in Stopping Status

  • DEV-54407: Striim Email Alert not sending the alerts

  • DEV-54419: UI Doesn't Allow for Setting Multiple Fields as Keys for a Typed Event

  • DEV-54477: Inconsistent app execution when processing CSV files using S3Reader: DSVParser

  • DEV-54480: striim.server.log contains repeated message(-WARN Thread

  • DEV-54501: Mariadb terminated: java.lang.ArrayIndexOutOfBoundsException

  • DEV-54514: DIGILE/MBSB: IL app remains in running state after data load completion

  • DEV-54579: OJet does not use unique constraint as key for newly created table

  • DEV-54686: MSJet Mine - Implement retry when "No backup found that contains LSN"

  • DEV-54787: Jira reader: HTTP [40002] Too many operands passed to JQL query

  • DEV-54789: IL (oracle -> databricks): 'Create Schema' causes mismatch for NUMBER and DATE

  • DEV-54839: Incomplete “Suggested Actions” section in exception message

  • DEV-54843: MSJet Mine mode - Frequent Query timeout, Stops hangs on larger backup processing

  • DEV-54932: App stops or terminates, not closing db session.

  • DEV-55020: Potential thread leak for USER_NOTIFICATION_QUEUE back-end

  • DEV-55096: Oracle19c (not container, primary) -> Oracle21c (downstream) setup

  • DEV-55224: Destroy Connection Object in native layer from the previous run of MSJet for the same app name.

  • DEV-55242: Key Attribute Showing NULL for ZipID in Kafka Offset Explorer

  • DEV-55266: Special character and case-sensitivity issues in Kafka message key and value schema field names

  • DEV-55319: Password is printed in agent log as clear text format

Known issues from past releases

  • DEV-5701: Dashboard queries not dropped with the dashboard or overwritten on import

    When you drop a dashboard, its queries are not dropped. If you drop and re-import a dashboard, the queries in the JSON file do not overwrite those already in Striim.

    Workaround: drop the namespace, or use LIST NAMEDQUERIES and then manually drop each query (see the console sketch below).
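
    A console sketch of the second workaround, assuming named queries can be dropped with the generic DROP <type> <name> pattern (the namespace and query name are hypothetical):

      LIST NAMEDQUERIES;
      DROP NAMEDQUERY myns.DashboardQuery1;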

  • DEV-8142: SORTER objects do not appear in the UI

  • DEV-8933: DatabaseWriter shows no error in UI when MySQL credentials are incorrect

    If your DatabaseWriter Username or Password values are incorrect, you will see no error in the UI, but no data will be written to MySQL. You will see errors regarding DatabaseWriter in webaction.server.log containing "Failure in Processing query" and "command denied to user."

  • DEV-11305: DatabaseWriter needs separate checkpoint table for each node when deployed on multiple nodes

  • DEV-17653: Import of custom Java function fails

    IMPORT STATIC may fail. Workaround: use lowercase import static.
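
    For example, if the uppercase form fails (the package, class, and function names here are hypothetical):

      -- may fail:
      -- IMPORT STATIC com.example.functions.MyFunctions.maskValue;
      -- workaround:
      import static com.example.functions.MyFunctions.maskValue;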

  • DEV-19903: When DatabaseReader Tables property uses wildcard, views are also read

    Workaround: use Excluded Tables to exclude the views.
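
    For example, a DatabaseReader source that excludes two views matched by the wildcard (the schema, view, source, and stream names are hypothetical):

      CREATE SOURCE OracleIL USING DatabaseReader (
        Username: 'striim',
        Password: '******',
        ConnectionURL: '<JDBC connection URL>',
        Tables: 'MYSCHEMA.%',
        ExcludedTables: 'MYSCHEMA.ORDERS_VIEW;MYSCHEMA.CUSTOMERS_VIEW'
      )
      OUTPUT TO ILStream;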

Third-party APIs, clients, and drivers used by readers and writers

  • Azure Event Hub Writer uses the azure-eventhubs API version 3.0.2.

  • Azure Synapse Writer uses the bundled SQL Server JDBC driver.

  • BigQuery Writer uses google-cloud-bigquery version 2.42.3 and google-cloud-bigquerystorage version 3.9.1.

  • Cassandra Cosmos DB Writer uses cassandra-jdbc-wrapper version 3.1.0.

  • Cassandra Writer uses cassandra-java-driver version 3.6.0.

  • Cloudera Hive Writer uses hive-jdbc version 3.1.3.

  • CosmosDB Reader uses Microsoft Azure Cosmos SDK for Azure Cosmos DB SQL API 4.54.0.

  • CosmosDB Writer uses documentdb-bulkexecutor version 2.3.0.

  • Databricks Writer uses Databricks JDBC driver version 2.6.29. It also uses the following:

    • for authentication using Azure Active Directory and staging in ADLS Gen2: azure-identity version 1.5.3

    • for staging in ADLS Gen2: azure-storage-blob version 12.18.0

    • for staging in DBFS: databricks-rest-client version 3.2.2

    • for staging in S3: aws-java-sdk-s3 version 1.12.589 and aws-java-sdk-sts version 1.11.320

  • Derby: the internal Derby instance is version 10.9.1.0.

  • Elasticsearch: the internal Elasticsearch cluster is version 5.6.4.

  • Fabric Data Warehouse Writer uses mssql-jdbc version 12.8.1.jre8, msal4j version 1.17.1, and azure-storage version 4.4.0.

  • Fabric Lakehouse File Writer uses httpclient version 4.5.13.

  • GCS Writer uses the google-cloud-storage client API version 1.106.0.

  • Google PubSub Writer uses the google-cloud-pubsub client API version 1.110.0.

  • Hazelcast is version 5.3.5.

  • HBase Writer uses HBase-client version 2.4.13.

  • Hive Writer and Hortonworks Hive Writer use hive-jdbc version 3.1.3.

  • The HP NonStop readers use OpenSSL 1.0.2n.

  • JMS Reader and JMS Writer use the JMS API 1.1.

  • Kafka: the internal Kafka cluster is version 3.6.2.

  • Kudu: the bundled Kudu Java client is version 1.13.0.

  • Kinesis Writer uses aws-java-sdk-kinesis version 1.11.240.

  • MapR DB Writer uses hbase-client version 2.4.10.

  • MapR FS Reader and MapR FS Writer use Hadoop-client version 3.3.4.

  • MariaDB uses maria-binlog-connector-java-0.2.3-WA1.jar and mariadb-java-client-2.4.3.jar.

  • MariaDB Xpand uses mysql-binlog-connector-java-0.21.0.jar and mysql-connector-java-8.0.30.jar.

  • Mongo Cosmos DB Reader, MongoDB Reader, and MongoDB Writer use mongodb-driver-sync version 4.8.2.

  • MySQL uses mysql-binlog-connector-java-0.21.0.jar and mysql-connector-java version 8.0.27.

  • Oracle: the bundled Oracle JDBC driver is ojdbc-21.6.jar.

  • PostgreSQL: the bundled PostgreSQL JDBC 4.2 driver is postgresql-42.4.0.jar.

  • Redshift Writer uses aws-java-sdk-s3 1.11.320.

  • S3 Reader and S3 Writer use aws-java-sdk-s3 1.11.320.

  • Salesforce Reader always uses the latest version of the Force.com REST API.

  • Salesforce Writer: when Use Bulk Mode is True, uses Bulk API 2.0 Ingest; when Use Bulk Mode is False, uses the Force.com REST API version 53.1.0.

  • Snowflake Reader: the bundled Snowflake JDBC driver is snowflake-jdbc-3.18.0.jar.

  • Snowflake Writer: when Streaming Upload is False, uses snowflake-jdbc-3.18.0.jar; when Streaming Upload is True, uses snowflake-ingest-sdk version 2.2.2.

  • Spanner Writer uses the google-cloud-spanner client API version 1.28.0 and the bundled JDBC driver is google-cloud-spanner-jdbc version 1.1.0.

  • SQL Server: the bundled Microsoft SQL Server JDBC driver is mssql-jdbc-12.8.1.jre8.jar.

  • Yugabyte: uses the bundled PostgreSQL JDBC driver.