How to run Apache Kafka without Zookeeper


This article will help you run Apache Kafka without ZooKeeper, using KRaft mode.

Apache Kafka officially deprecated ZooKeeper in version 3.5, and ZooKeeper support is removed entirely in 4.0:

Kafka version    State
2.8              KRaft early access
3.3              KRaft production-ready
3.4              Migration scripts early access
3.5              ZooKeeper deprecated
4.0              ZooKeeper not supported

Instead, you can use Kafka Raft metadata mode (KRaft), which stores cluster metadata in Kafka itself rather than in ZooKeeper.

In KRaft mode, each Kafka server can be configured as a controller, a broker, or both using the process.roles property (a minimal example follows this list). The property can take the following values:

  • If process.roles is set to broker, the server acts as a broker.
  • If process.roles is set to controller, the server acts as a controller.
  • If process.roles is set to broker,controller, the server acts as both a broker and a controller.
  • If process.roles is not set at all, the server is assumed to be running in ZooKeeper mode.
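
For example, a minimal combined-mode sketch for a single node (the node ID, port, and quorum value below are illustrative, not required defaults):

 process.roles=broker,controller              # this server is both broker and controller
 node.id=1                                    # unique ID within the cluster
 controller.quorum.voters=1@localhost:19092   # id@host:port of each controller voter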

Let’s set it up.

First, install Java:

 yum install java-21-openjdk   # Java 11 or later is required
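
Verify the installation:

 java -version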

Then download and extract Apache Kafka:

 wget https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz
 tar -xzf kafka_2.13-3.6.0.tgz        
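
All of the commands below are run from the extracted directory:

 cd kafka_2.13-3.6.0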

Generate a unique cluster ID:

KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"

Example output (yours will differ):  ghagstgvgvgvgbb456789123

Format the storage directory using the default configuration available in config/kraft/server.properties:

bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties        
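
You can confirm the directory was formatted (assuming the default log.dirs of /tmp/kraft-combined-logs in server.properties); it should now contain a meta.properties file recording the cluster ID:

 ls /tmp/kraft-combined-logs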

Let’s start the Kafka server:

bin/kafka-server-start.sh config/kraft/server.properties

[2024-01-23 11:08:53,894] INFO Kafka commitId: 70d335423f5f123a (org.apache.kafka.common.utils.AppInfoParser)
[2024-01-23 11:08:53,894] INFO Kafka startTimeMs: 1706008133000 (org.apache.kafka.common.utils.AppInfoParser)
[2024-01-23 11:08:53,895] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)        
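
As a quick sanity check, you can create and describe a test topic against the running server (assuming the default PLAINTEXT listener on localhost:9092; the topic name here is arbitrary):

 bin/kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092
 bin/kafka-topics.sh --describe --topic test-topic --bootstrap-server localhost:9092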

For a multi-node cluster, copy the default configuration once per server:

cp config/kraft/server.properties config/kraft/server1.properties
cp config/kraft/server.properties config/kraft/server2.properties
cp config/kraft/server.properties config/kraft/server3.properties        

server1.properties

 node.id=1                                   # unique ID for this server
 process.roles=broker,controller             # combined mode: broker and controller
 inter.broker.listener.name=PLAINTEXT
 controller.listener.names=CONTROLLER
 listeners=PLAINTEXT://:9092,CONTROLLER://:19092
 log.dirs=/tmp/server1/kraft-combined-logs   # must be unique per server on one host
 listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
 controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094   # id@host:port of all three controllers

server2.properties (only node.id, the listener ports, and log.dirs change):

 node.id=2
 process.roles=broker,controller
 inter.broker.listener.name=PLAINTEXT
 controller.listener.names=CONTROLLER
 listeners=PLAINTEXT://:9093,CONTROLLER://:19093
 log.dirs=/tmp/server2/kraft-combined-logs
 listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
 controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094

server3.properties

 node.id=3
 process.roles=broker,controller
 inter.broker.listener.name=PLAINTEXT
 controller.listener.names=CONTROLLER
 listeners=PLAINTEXT://:9094,CONTROLLER://:19094
 log.dirs=/tmp/server3/kraft-combined-logs
 listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
 controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094

Format the log directories for servers 1, 2, and 3, reusing the same cluster ID:

bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server1.properties
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server2.properties
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server3.properties        

Start the three servers:

./bin/kafka-server-start.sh ./config/kraft/server1.properties
./bin/kafka-server-start.sh ./config/kraft/server2.properties
./bin/kafka-server-start.sh ./config/kraft/server3.properties        
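
Each of these commands blocks its terminal, so either run them in three separate terminals or pass the -daemon flag to run them in the background, for example:

 ./bin/kafka-server-start.sh -daemon ./config/kraft/server1.properties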

You now have a basic Kafka environment running without ZooKeeper. From there, you can create topics and write and read events.
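
For example, using the console clients shipped with Kafka (assuming the test-topic created earlier and a broker listening on localhost:9092):

 bin/kafka-console-producer.sh --topic test-topic --bootstrap-server localhost:9092
 bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server localhost:9092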

