Building a pipeline with Kafka Writer
You can read from any Striim-supported source and write to Kafka using the Kafka Writer. In some cases, you may need an intermediate processing step (for example, a continuous query) to convert the output type to a format that the Kafka Writer supports.
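As a minimal sketch, an intermediate continuous query (CQ) can reshape the source output before it reaches the writer. The stream and field names below are illustrative, not part of the product documentation:

```
-- Hypothetical CQ that reshapes source events into a stream
-- the Kafka Writer can consume. Names are illustrative.
CREATE CQ ConvertForKafka
INSERT INTO KafkaReadyStream
SELECT data
FROM SourceStream;   -- the actual conversion logic depends on your source type
```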
Configuring the Kafka target involves the following steps:
Configure the Kafka Connection Profile with the respective properties.
Decide the authentication type that best suits your needs.
Decide if you want to encrypt your data.
If you also need client-side authentication, Mutual TLS (mTLS) is available.
Understand the data distribution requirements.
Configure the “Topics” property with one or more topics.
Decide how you want to partition the data within the topic and set the “Partition key”. If the partitioning is based on Message Keys, then set “UseMessageKeyAsPartitionKey” to true.
Decide whether the topics will be pre-created or auto-created by the target.
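The configuration steps above might look like the following TQL sketch. The connection profile name and property values are assumptions for illustration; verify the exact property names against your Striim version:

```
-- Hypothetical Kafka Writer target. 'MyKafkaProfile' is an assumed
-- pre-created connection profile; topic and key names are illustrative.
CREATE TARGET KafkaTarget USING KafkaWriter (
  ConnectionProfileName: 'MyKafkaProfile',
  Topics: 'orders',
  PartitionKey: 'order_id'   -- or set UseMessageKeyAsPartitionKey to true
)
FORMAT USING JSONFormatter ()
INPUT FROM KafkaReadyStream;
```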
Message semantics requirements
By default, the writer has “E1P” (exactly-once processing) set to true, which requires recovery to be turned ON. E1P also needs an extra checkpointing topic; if that topic does not exist, the writer creates it.
You can choose to turn E1P OFF. While it is off, A1P (at-least-once processing) is the expected semantics.
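A hedged fragment showing how this choice might appear in a target definition; the property spelling is an assumption to check against your Striim version:

```
-- Hypothetical fragment: E1P disabled, giving at-least-once (A1P) semantics.
CREATE TARGET KafkaTarget USING KafkaWriter (
  Topics: 'orders',
  E1P: 'false'   -- default is 'true', which requires recovery to be ON
)
FORMAT USING JSONFormatter ()
INPUT FROM KafkaReadyStream;
```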
Design your Kafka Message
Choose the Header and Keys (custom or primary keys) to be added to the Kafka message.
Choose the formatter that suits your needs.
If you are moving data from an OLTP or OLAP source, AvroFormatter can provide closer type mapping when the initial schema is migrated along with the Initial Load, before the CDC data is moved.
With AvroFormatter, choose the format of the DML record (Default, Native, or Table) according to your downstream application's needs.
Choose your serializer - Striim or Confluent.
When using Schema Registry, pre-register your Avro schema. If the schema is not found, AvroFormatter creates the corresponding schema. See the section on external schema formats for more details.
Decide how you want to handle DDL from an OLTP source: Auto or Manual.
Fine-tune your performance by increasing the number of parallel threads if required. Note that this also increases the amount of memory used by the target.
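Putting the message-design choices together, a target using AvroFormatter might be sketched as follows. The schema registry URL, topic, and property names are assumptions for illustration; confirm them against your Striim version and registry deployment:

```
-- Hypothetical sketch: AvroFormatter with a schema registry and parallel threads.
CREATE TARGET KafkaAvroTarget USING KafkaWriter (
  Topics: 'cdc_events',
  ParallelThreads: '4'   -- more threads raise throughput and memory use
)
FORMAT USING AvroFormatter (
  formatAs: 'Table',                               -- Default / Native / Table
  schemaRegistryURL: 'http://schema-registry:8081' -- assumed registry endpoint
)
INPUT FROM CDCStream;
```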
Keep these key considerations in mind before you start building a pipeline with the Kafka Writer.