How to run Apache Kafka without Zookeeper
This article will help you run Kafka without ZooKeeper, using KRaft mode.
Apache Kafka has officially deprecated ZooKeeper in version 3.5.
Kafka version | State
2.8           | KRaft early access
3.3           | KRaft production-ready
3.4           | Migration scripts early access
3.5           | ZooKeeper deprecated
4.0           | ZooKeeper not supported
Instead, you can use Kafka Raft metadata mode (KRaft).
In KRaft mode, each Kafka server can be configured as a controller, a broker, or both, using the process.roles property. This property can take the following values:
- If process.roles is set to broker, the server acts as a broker.
- If process.roles is set to controller, the server acts as a controller.
- If process.roles is set to broker,controller, the server acts as both a broker and a controller.
- If process.roles is not set at all, it is assumed to be in ZooKeeper mode.
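For example, a minimal combined-mode configuration (the node ID and quorum address below are illustrative placeholders) might look like:

```properties
# Act as both broker and controller on one server (combined mode)
process.roles=broker,controller
node.id=1
# Controllers listed here form the Raft quorum, as id@host:port
controller.quorum.voters=1@localhost:19092
```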
Let’s set it up.
First, install Java:
yum install java-21-openjdk   # Java 11 or newer
Then download and extract Apache Kafka:
wget https://archive.apache.org/dist/kafka/3.6.0/kafka_2.13-3.6.0.tgz
tar -xzf kafka_2.13-3.6.0.tgz
cd kafka_2.13-3.6.0
Generate a unique cluster ID:
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
The command prints a randomly generated cluster ID (a base64-encoded UUID).
A default KRaft configuration is available in config/kraft/server.properties. Format the storage directory with the cluster ID:
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server.properties
Let’s start the Kafka broker.
bin/kafka-server-start.sh config/kraft/server.properties
[2024-01-23 11:08:53,894] INFO Kafka commitId: 70d335423f5f123a (org.apache.kafka.common.utils.AppInfoParser)
[2024-01-23 11:08:53,894] INFO Kafka startTimeMs: 1706008133000 (org.apache.kafka.common.utils.AppInfoParser)
[2024-01-23 11:08:53,895] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)
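To confirm the broker is healthy, you can create a test topic and describe it; kafka-topics.sh ships with the Kafka distribution, and the topic name here is just an example:

```shell
# Create a test topic on the single-node broker
bin/kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server localhost:9092 \
  --partitions 1 --replication-factor 1

# Describe it to verify the broker accepted the request
bin/kafka-topics.sh --describe --topic smoke-test \
  --bootstrap-server localhost:9092
```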
Multi-node setup
For a three-node cluster, create one configuration file per node:
cp config/kraft/server.properties config/kraft/server1.properties
cp config/kraft/server.properties config/kraft/server2.properties
cp config/kraft/server.properties config/kraft/server3.properties
config/kraft/server1.properties:
node.id=1
process.roles=broker,controller
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://:9092,CONTROLLER://:19092
log.dirs=/tmp/server1/kraft-combined-logs
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094
config/kraft/server2.properties:
node.id=2
process.roles=broker,controller
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://:9093,CONTROLLER://:19093
log.dirs=/tmp/server2/kraft-combined-logs
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094
config/kraft/server3.properties:
node.id=3
process.roles=broker,controller
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listeners=PLAINTEXT://:9094,CONTROLLER://:19094
log.dirs=/tmp/server3/kraft-combined-logs
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
controller.quorum.voters=1@localhost:19092,2@localhost:19093,3@localhost:19094
Format the log directories for servers 1, 2, and 3:
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server1.properties
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server2.properties
bin/kafka-storage.sh format -t $KAFKA_CLUSTER_ID -c config/kraft/server3.properties
Start the Kafka cluster by launching each server, each in its own terminal:
./bin/kafka-server-start.sh ./config/kraft/server1.properties
./bin/kafka-server-start.sh ./config/kraft/server2.properties
./bin/kafka-server-start.sh ./config/kraft/server3.properties
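Once all three servers are up, you can inspect the Raft quorum with the kafka-metadata-quorum tool that ships with Kafka since 3.3:

```shell
# Show quorum status: leader id, voter set, and lag of each voter
bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
```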
You now have a basic three-node Kafka environment running and ready to use. From there, you can create topics and write and read events.
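As a starting point, here is a sketch of a console producer/consumer round-trip against the cluster; the topic name is illustrative:

```shell
# Create a topic replicated across all three brokers
bin/kafka-topics.sh --create --topic demo-events \
  --bootstrap-server localhost:9092 \
  --partitions 3 --replication-factor 3

# Write a few events interactively (Ctrl-C to stop)
bin/kafka-console-producer.sh --topic demo-events \
  --bootstrap-server localhost:9092

# Read them back from the beginning (Ctrl-C to stop)
bin/kafka-console-consumer.sh --topic demo-events \
  --bootstrap-server localhost:9092 --from-beginning
```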