Setup Single Node Single/Multi Broker Configuration (Kafka and ZooKeeper): Kafka Operations Sample Commands

Install/Configure ZooKeeper: Install and verify the ZooKeeper environment and create a data directory to store ZooKeeper data.
  1. Download ZooKeeper and unzip it. Rename the directory to zookeeper and place it at a location such as /usr/local/zookeeper.
  2. Set environment variables. The directory /usr/local/zookeeper will be referred to as $ZK_HOME. Open the profile file and add the environment variables by running the commands below.
    $ vi /etc/profile
    
    Add the two lines below to the profile so that the $ZK_HOME environment variable refers to /usr/local/zookeeper.
    export ZK_HOME=/usr/local/zookeeper
    export PATH=$ZK_HOME/bin:$PATH
  3. Create the ZooKeeper data directory (/var/lib/zookeeper) and give the centos user access to it. Here centos is my privileged user for accessing all services.
    $ sudo mkdir /var/lib/zookeeper
    $ sudo chown -R centos:centos /var/lib/zookeeper
    
  4. Create a file $ZK_HOME/conf/zoo.cfg and add the following lines to it. Refer to the sample file "zoo_sample.cfg".
    tickTime=2000
    dataDir=/var/lib/zookeeper
    clientPort=2181
    initLimit=20
    syncLimit=5
    
  5. Verify the ZooKeeper installation. Start ZooKeeper by running the command below and verify that it has started (an optional status check is shown after this list).
    [centos@host01 conf]$ zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
 For more details related to ZooKeeper, refer to the ZooKeeper documentation.
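
Optionally, confirm that ZooKeeper is actually serving requests before moving on. This is a minimal check, assuming the default client port 2181 from zoo.cfg above; the nc check only works if nc is installed and four-letter-word commands are enabled on the server.
    # Ask zkServer.sh for the current mode; a single-server setup should report standalone mode
    [centos@host01 ~]$ zkServer.sh status
    # Send the "ruok" four-letter word; a healthy server replies "imok"
    [centos@host01 ~]$ echo ruok | nc localhost 2181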

Install/Configure Kafka: Install and verify the Kafka environment and create a data directory to store Kafka data.
  1. Download Apache Kafka and unzip it. Rename the directory to kafka and place it at a location such as /usr/local/kafka.
  2. Set environment variables. The directory /usr/local/kafka will be referred to as $KAFKA_HOME. Open the profile file and add the environment variables by running the commands below (a quick check is shown after this list).
    $ vi /etc/profile
    
    Add the two lines below to the profile so that the $KAFKA_HOME environment variable refers to /usr/local/kafka.
    export KAFKA_HOME=/usr/local/kafka
    export PATH=$KAFKA_HOME/bin:$PATH
  3. Create the Kafka data directory (/var/lib/kafka) and give the centos user access to it, as was done for ZooKeeper.
    $ sudo mkdir /var/lib/kafka
    $ sudo chown -R centos:centos /var/lib/kafka
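
    After editing /etc/profile, the new variables only take effect in a fresh login shell unless the file is re-read. A quick optional check, assuming the paths used in this setup:
    [centos@host01 ~]$ source /etc/profile
    [centos@host01 ~]$ echo $KAFKA_HOME
    /usr/local/kafka
    [centos@host01 ~]$ which kafka-topics.sh
    /usr/local/kafka/bin/kafka-topics.sh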

Setup Single Broker Kafka (Single Node-Single Broker)

The steps below set up and configure Kafka as a Single Node - Single Broker deployment.
  1. Go to the $KAFKA_HOME/config directory and make a copy of server.properties.
    [centos@host01 conf]$ cd $KAFKA_HOME/config
    [centos@host01 config]$ cp server.properties server-1.properties
    
  2. Open server-1.properties and make the following changes.
    Modify broker.id to 101, uncomment the listeners entry and configure it to bind on localhost:9091, and modify log.dirs to /var/lib/kafka/kafka-logs-1.
    broker.id=101
    listeners=PLAINTEXT://localhost:9091
    log.dirs=/var/lib/kafka/kafka-logs-1
  3. Go to $KAFKA_HOME and start the Kafka broker by running the commands below. Verify that the Kafka server has started (a note on running the broker in the background follows this list).
    [centos@host01 ~]$ cd $KAFKA_HOME
    [centos@host01 kafka]$ bin/kafka-server-start.sh config/server-1.properties
    
  4. Run the jps command and verify that the QuorumPeerMain and Kafka Java processes are running.
    [centos@host01 ~]$ jps
    3072 QuorumPeerMain
    4116 Kafka
    4475 Jps
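
Note that kafka-server-start.sh runs in the foreground and keeps the terminal occupied. A sketch of running the broker in the background instead, using the script's -daemon option:
    # Start broker 101 detached from the terminal; its server logs go under $KAFKA_HOME/logs by default
    [centos@host01 kafka]$ bin/kafka-server-start.sh -daemon config/server-1.properties
    # Stop it later (this script stops every Kafka broker on the host)
    [centos@host01 kafka]$ bin/kafka-server-stop.sh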

Basic Kafka Operations: (Single Node - Single Broker)

  1. Create topic: Execute the command below to create a topic named topic-devinline-1.
    [centos@host01 ~]$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topic-devinline-1
    Created topic "topic-devinline-1".
    
  2. List topics in the broker: Execute the command below to display all topics and validate that topic-devinline-1 exists.
    [centos@host01 ~]$ kafka-topics.sh --list --zookeeper localhost:2181
    topic-devinline-1
    
  3. Describe topic details: Use the --describe switch to display details of the topic.
    [centos@host01 ~]$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic topic-devinline-1
    Topic:topic-devinline-1 PartitionCount:1 ReplicationFactor:1 Configs:
     Topic: topic-devinline-1 Partition: 0 Leader: 101 Replicas: 101 Isr: 101
    
  4. Start producer: Run the command below to start the producer and produce messages.
    [centos@host01 ~]$ kafka-console-producer.sh --broker-list localhost:9091 --topic topic-devinline-1
    >Hello,Message-1    
    >Hello,Message-2
    >Hello,Message-3
    >
    
  5. Start consumer: Run the command below to start a consumer that consumes messages from the beginning, and verify that all the messages produced are consumed.
    [centos@host01 ~]$ kafka-console-consumer.sh --bootstrap-server localhost:9091 --topic  topic-devinline-1 --from-beginning
    Hello,Message-1
    Hello,Message-2
    Hello,Message-3
    
  6. Producer and consumer in sync: A message produced in the producer terminal is consumed in the consumer terminal at the same time (a keyed-message variant is sketched after this list).
    (Screenshot: producer producing messages and consumer consuming them in sync)
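
The console tools can also carry a message key, which determines the partition a record lands in. A minimal sketch against the same broker and topic as above, using the console producer/consumer properties parse.key, key.separator and print.key; the user1/user2 keys are just sample values.
    # Produce keyed messages as key:value pairs
    [centos@host01 ~]$ kafka-console-producer.sh --broker-list localhost:9091 --topic topic-devinline-1 --property parse.key=true --property key.separator=:
    >user1:Hello,Keyed-Message-1
    >user2:Hello,Keyed-Message-2
    # Consume and print the key alongside each value
    [centos@host01 ~]$ kafka-console-consumer.sh --bootstrap-server localhost:9091 --topic topic-devinline-1 --from-beginning --property print.key=true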

Setup Multi Broker Kafka (Single Node - Multiple Broker)

Earlier we set up one broker on a single node. Now add two more Kafka brokers to the existing configuration to make it a Single Node - Multiple Broker configuration. Execute the following commands to set it up.
  1. Go to the $KAFKA_HOME/config directory and make two copies of server-1.properties.
    [centos@host01 config]$ cp server-1.properties server-2.properties
    [centos@host01 config]$ cp server-1.properties server-3.properties
    
  2. Update server-2.properties with the following details.
    broker.id=102
    listeners=PLAINTEXT://localhost:9092
    log.dirs=/var/lib/kafka/kafka-logs-2
    
  3. Update server-3.properties with the following details.
    broker.id=103
    listeners=PLAINTEXT://localhost:9093
    log.dirs=/var/lib/kafka/kafka-logs-3
    
  4. Start the two new brokers by running bin/kafka-server-start.sh with config/server-2.properties and config/server-3.properties (the first broker should already be running), then run jps and verify that all three Kafka brokers are running along with ZooKeeper (a background-start sketch follows this list).
    [centos@host01 ~]$ jps
    3072 QuorumPeerMain
    8821 Kafka
    9514 Kafka
    8411 Kafka
    9867 Jps
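
For convenience, the brokers from the steps above can also be started in the background in one go. A sketch, assuming the same three properties files and that ZooKeeper is already running:
    [centos@host01 ~]$ cd $KAFKA_HOME
    [centos@host01 kafka]$ bin/kafka-server-start.sh -daemon config/server-1.properties
    [centos@host01 kafka]$ bin/kafka-server-start.sh -daemon config/server-2.properties
    [centos@host01 kafka]$ bin/kafka-server-start.sh -daemon config/server-3.properties
    # jps should now list QuorumPeerMain plus three Kafka processes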
    

Basic Kafka Operations: (Single Node - Multiple Broker)

Now we have three brokers up and running, and we can perform Kafka operations in a multi-broker environment.
  1. Create topic: Create a new topic with replication factor 3, as there are 3 brokers.
    [centos@host01 ~]$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic Multibroker-Devinline
    Created topic "Multibroker-Devinline".
    
  2. Describe topic details: Using the --describe switch we can display topic details. Leader is the broker currently serving the partition, Replicas lists all brokers holding a copy, and Isr shows the replicas that are in sync with the leader.
    [centos@host01 ~]$ kafka-topics.sh --describe --zookeeper localhost:2181 --topic Multibroker-Devinline
    Topic:Multibroker-Devinline PartitionCount:1 ReplicationFactor:3 Configs:
     Topic: Multibroker-Devinline Partition: 0 Leader: 102 Replicas: 102,101,103 Isr: 102,101,103
    
  3. Start the producer and produce messages
    [centos@host01 ~]$ kafka-console-producer.sh --broker-list localhost:9093 --topic Multibroker-Devinline
    >Hello, Multibroker-1        
    >Hello, Multibroker-2
    >Hello, Multibroker-3
    >Hello, Multibroker-4
    
  4. Start the consumer and consume messages
    [centos@host01 ~]$ kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic Multibroker-Devinline --from-beginning
    Hello, Multibroker-1
    Hello, Multibroker-2
    Hello, Multibroker-3
    Hello, Multibroker-4
    
  5. List the topics in the environment: The --list switch is used to display all topics in the environment.
    [centos@host01 ~]$ kafka-topics.sh --list --zookeeper localhost:2181
    Multibroker-Devinline
    __consumer_offsets
    topic-devinline-1
    
  6. Modify topic config: Increase the partition count from 1 to 2 for topic topic-devinline-1.
    [centos@host01 ~]$ kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic-devinline-1 --partitions 2
    WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
    Adding partitions succeeded!
    
  7. Delete topic: A topic can be deleted using the --delete switch. By default the topic is not deleted immediately; it is only marked for deletion (see the note on delete.topic.enable after this list).
    [centos@host01 ~]$ kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic-devinline-1
    Topic topic-devinline-1 is marked for deletion.
    Note: This will have no impact if delete.topic.enable is not set to true.
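
As the warning above says, the delete request only takes effect if the brokers allow topic deletion. A sketch of enabling it, assuming the same server-1/2/3.properties files used in this setup; each broker needs the line and a restart.
    # Add to server-1.properties, server-2.properties and server-3.properties, then restart the brokers
    delete.topic.enable=true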
    
