# Manual Checks
# Apache Zookeeper
As the cluster is now ready, let's make sure that Apache ZooKeeper works correctly. Connect to the ZooKeeper node:
For example:

```shell
docker-compose -f kafka-cluster.yml exec zk-1 bash
```
Once connected, use the `zookeeper-shell` command to check the cluster's state:

```shell
zookeeper-shell 127.0.0.1:2181
```
Check out the root nodes with the following command:

```shell
ls /
```

```
[admin, brokers, cluster, config, consumers, controller, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, schema_registry, zookeeper]
```
Now let's make sure that the brokers are correctly registered:

```shell
ls /brokers/ids      # lists the active brokers
ls /brokers/topics   # lists the topics
get /brokers/ids/0   # shows detailed information about the broker with id '0'
```
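The `get` command returns the broker's registration data as JSON. As an illustrative sketch, here is how that payload could be inspected with Python's standard library. The sample payload below is an assumption of the typical shape, not output captured from this cluster, and the exact fields vary by Kafka version:

```python
import json

# Hypothetical sample of what `get /brokers/ids/0` may return;
# the exact fields depend on the Kafka version.
payload = '''{"listener_security_protocol_map": {"PLAINTEXT": "PLAINTEXT"},
"endpoints": ["PLAINTEXT://kafka-1:9092"],
"host": "kafka-1", "port": 9092, "version": 4, "timestamp": "1600000000000"}'''

broker = json.loads(payload)
print(broker["host"])       # hostname the broker advertises
print(broker["endpoints"])  # listener endpoints clients can connect to
```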
# ZooKeeper Commands: The Four Letter Words
> ZooKeeper responds to a small set of commands. Each command is composed of four letters. You can issue the commands to ZooKeeper via telnet or nc, at the client port.
>
> — Apache ZooKeeper documentation
To be able to use these commands, add the following option when starting the application:

```
KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
```
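In a Docker Compose setup, this option goes under the ZooKeeper service's `environment` section. A minimal sketch, assuming a service name and image like those used in this chapter (both are assumptions, adapt to your own compose file):

```yaml
# Hypothetical excerpt of kafka-cluster.yml; service and image names are assumptions.
services:
  zk-1:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      # Whitelist all four-letter-word commands (use a narrower list in production)
      KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=*"
```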
The `ruok` command checks whether the server is running in a non-error state. The server responds with `imok` if it is running; otherwise it does not respond at all.

Here's an example of the `ruok` command:

```shell
echo ruok | nc 127.0.0.1 2181
imok
```
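The `echo … | nc` pipeline above is just a raw TCP exchange: open a connection, write the four letters, and read the reply. A minimal Python sketch of the same exchange (host, port, and command are parameters; nothing here is ZooKeeper-specific beyond the protocol being plain TCP):

```python
import socket

def send_4lw(host: str, port: int, command: str) -> str:
    """Send a four-letter-word command over TCP and return the server's reply."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command.encode("ascii"))
        sock.shutdown(socket.SHUT_WR)  # signal that the request is complete
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:  # server closed the connection
                break
            chunks.append(data)
        return b"".join(chunks).decode("ascii", errors="replace")

# Against a running ZooKeeper this is equivalent to `echo ruok | nc 127.0.0.1 2181`:
# send_4lw("127.0.0.1", 2181, "ruok")
```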
You can find all the commands in the Apache ZooKeeper documentation.
TIP
Work is in progress to remove the Apache ZooKeeper dependency. You can follow it with KIP-500 and read more about it in this blog post.
# Apache Kafka
# Logs
Like any other application, it's possible to browse the logs to verify that the broker is running correctly.

Confluent Docker images are configured to send all logs to stdout by default. This configuration is written in the `/etc/kafka/log4j.properties` file.
To browse these logs:

```shell
docker-compose -f kafka-cluster.yml logs kafka-1
```
TIP
Add `-f` or `--follow` to continue streaming the logs.
With a classic installation, the broker writes its logs to several files inside `/var/log/kafka`:

- The logs from the server go to `logs/server.log`.
- The controller is responsible for cluster management and handles events like broker failures, leader election, topic deletion and more. It manages the state of all resources in the Kafka cluster, including topics, partitions, brokers and replicas. When the controller changes the state of any resource, it logs the action to a special state change log stored under `logs/state-change.log`.
- `log-cleaner.log` contains logs from the LogCleaner and topic compaction activities.
Here's the default `log4j.properties`.
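As a hedged illustration of what such a file looks like, a Log4j 1.x configuration that routes broker logs to stdout typically has roughly this shape (this is an assumption about the general form, not the exact Confluent file):

```properties
# Illustrative sketch of a Log4j 1.x config sending everything to the console
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
```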