
Apache Kafka is a distributed streaming platform that is highly scalable and secure. It can:

  • Consume and publish messages/event streams, similar to an enterprise messaging system.
  • Store the messages durably for as long as you want.
  • Process the messages/event streams as they occur.
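
For context, the sketch below is a minimal, hypothetical illustration of the publish and consume sides using the standard Apache Kafka Java client. The broker address, topic name, and consumer group are placeholder assumptions, not values from this product.

  // Minimal sketch, assuming a broker at localhost:9092 and a topic named "events".
  import org.apache.kafka.clients.consumer.*;
  import org.apache.kafka.clients.producer.*;
  import java.time.Duration;
  import java.util.Collections;
  import java.util.Properties;

  public class KafkaQuickStart {
      public static void main(String[] args) {
          // Publish a message/event to a topic.
          Properties producerProps = new Properties();
          producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
          producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
          try (Producer<String, String> producer = new KafkaProducer<>(producerProps)) {
              producer.send(new ProducerRecord<>("events", "key-1", "hello"));
          }

          // Consume the stored messages; Kafka retains them per the topic's retention settings.
          Properties consumerProps = new Properties();
          consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");
          consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
          consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
          consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
          try (Consumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
              consumer.subscribe(Collections.singletonList("events"));
              ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
              records.forEach(r -> System.out.printf("%s -> %s%n", r.key(), r.value()));
          }
      }
  }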

Creating a Kafka Target

Follow the steps below to create a Kafka target:

  1. Click Configure > TARGETS > Kafka Target.
  2. Click Create Kafka Target.
  3. In the Create Kafka Target window, do the following:



    1. In the Name and Description fields, enter a name and description for the Kafka target.
    2. In the Kafka Account field, select the Kafka account. 

    3. In the Topic(s) field, select the topic(s) to be used.

    4. In the Message Key field, enter the message key.
    5. Select the Enable Message Splitter check box to split the message, and then define the following properties:
      1. In the Message Type field, select the message format.

        The supported message formats are JSON, XML, and PLAINTEXT.

      2. In the Message Splitter field, enter the delimiter on which the messages will be split.
        The expected value depends on the Message Type:

        • XML: an element name of the record.
        • PLAINTEXT: a delimiter string.
        • JSON: can be empty or an element name.
    6. In the Compression field, specify the compression codec to be applied to all data generated by this producer (see the producer configuration sketch after these steps).
    7. In the Request Required Acks field, select the number of acknowledgments the producer requires the leader to have received before considering a request complete.

      This controls the durability of records that are sent. The following settings are common:

      • If set to 0, the producer will not wait for any acknowledgment from the server at all. The record will be immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the retries configuration will not take effect (as the client won't generally know of any failures). The offset returned for each record will always be set to -1.

      • If set to 1, the leader will write the record to its local log but will respond without awaiting full acknowledgement from all followers. In this case, should the leader fail immediately after acknowledging the record but before the followers have replicated it, the record will be lost.

      • If set to all, the leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees that the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee.

    8. In the Request Timeout field, enter the amount of time the broker will wait trying to meet the Request Required Acks requirement before sending back an error to the client.

    9. In the Number of Retries field, enter the number of retries.
      Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error. Note that this retry is no different from the client resending the record after receiving the error. Allowing retries can change the ordering of records: if two records are sent to the same partition, and the first fails and is retried while the second succeeds, the second record may appear first.

    10. In the Select Project field, select the project.

    11. Click Save.
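
For reference, steps 4 through 9 above correspond to standard Kafka producer settings. The sketch below, written against the Apache Kafka Java client, shows roughly how the Message Key, Message Splitter (for PLAINTEXT), Compression, Request Required Acks, Request Timeout, and Number of Retries values could be applied when publishing. The broker address, topic, payload, delimiter, and key are placeholder assumptions, not values taken from the product.

  import org.apache.kafka.clients.producer.*;
  import java.util.Properties;

  public class KafkaTargetSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

          // Compression (step 6): compression codec for all data generated by this producer.
          props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "gzip");
          // Request Required Acks (step 7): 0, 1, or "all".
          props.put(ProducerConfig.ACKS_CONFIG, "all");
          // Request Timeout (step 8): how long to wait for the acks requirement before erroring out.
          props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");
          // Number of Retries (step 9): resend records that fail with a potentially transient error.
          props.put(ProducerConfig.RETRIES_CONFIG, "3");

          // Message Splitter for PLAINTEXT (step 5): split the payload on a delimiter and publish
          // each segment as its own record, all sharing the Message Key (step 4).
          String payload = "record-1|record-2|record-3"; // hypothetical input payload
          String delimiter = "\\|";                      // hypothetical Message Splitter value
          String messageKey = "order-events";            // hypothetical Message Key value
          try (Producer<String, String> producer = new KafkaProducer<>(props)) {
              for (String segment : payload.split(delimiter)) {
                  producer.send(new ProducerRecord<>("my-topic", messageKey, segment));
              }
          }
      }
  }

Combining acks set to all with a non-zero retry count gives the strongest delivery guarantee described in step 7, at the cost of additional latency.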
