Docker Compose Kafka Setup (Confluent Cloud)

In this tutorial we will explain how to set up a Kafka cluster with Docker Compose for local development.

We are going to bind the container ports to host ports so that Kafka is also reachable from other Compose stacks running on a different Docker network. Four containers will be defined in the compose file: zookeeper, kafka, control-center, and kafka-topic-generator.

The kafka-topic-generator is used only to create topics and run Kafka maintenance tasks on initialization; it does not stay running after its initial script finishes.
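Since the original service definition is not reproduced in this post, here is a minimal sketch of what such a one-shot service often looks like; the image tag, the internal listener address kafka:29092, and the topic name example-topic are assumptions:

```yaml
  kafka-topic-generator:
    image: confluentinc/cp-kafka:7.3.0   # same image as the broker, just for its CLI tools
    depends_on:
      - kafka
    # Runs once, creates the topic, then exits; the container is not restarted.
    command: >
      bash -c "
        kafka-topics --bootstrap-server kafka:29092
          --create --if-not-exists --topic example-topic
          --partitions 1 --replication-factor 1"
```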

The control-center is a web app for managing the Kafka cluster through a UI instead of the command line. Through the control-center you can easily create new topics and watch events on the fly.


We will build up the docker-compose.yaml file for this stack section by section below.

Coordination Cluster with Zookeeper

The zookeeper configuration is pretty straightforward: the only required setting is the ZOOKEEPER_CLIENT_PORT environment variable. This is the port on which ZooKeeper will listen for connections from Kafka.
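As a sketch (the image tag is an assumption), the service can look like this:

```yaml
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181   # the port Kafka connects to
```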

Docker Compose Kafka Configuration

The kafka container depends on zookeeper, so Compose starts it after the zookeeper container is up (note that depends_on only waits for the container to start, not for ZooKeeper to be ready to serve requests). There are a few environment variables that need to be set so that the broker accepts requests from outside the Compose network (for example a Node application running on the host machine).

  • KAFKA_BROKER_ID – ID of the broker; if not set, one is generated automatically
  • KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR – Must be lowered when you run fewer than 3 brokers, since it defaults to 3
  • KAFKA_ZOOKEEPER_CONNECT – URL of the zookeeper container
  • KAFKA_INTER_BROKER_LISTENER_NAME – Name of the listener used for communication between brokers
  • KAFKA_LISTENERS – The addresses and ports on which the broker accepts connections, for both the OUTSIDE and INTERNAL listeners
  • KAFKA_ADVERTISED_LISTENERS – The connection addresses that are handed back to clients
  • KAFKA_LISTENER_SECURITY_PROTOCOL_MAP – Maps each listener (OUTSIDE and INTERNAL) to its security protocol (e.g. PLAINTEXT)

For this container, we specify two mount points to store the Kafka data and configuration in local folders.
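Putting the variables above together, the broker service might look like the following sketch; the image tag, the listener ports, and the host folder paths are assumptions, so adjust them to your project:

```yaml
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"   # expose the OUTSIDE listener on the host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:29092,OUTSIDE://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1   # single broker, so 1
    volumes:
      - ./kafka/data:/var/lib/kafka/data    # broker data
      - ./kafka/config:/etc/kafka           # broker configuration
```

Clients inside the Compose network use kafka:29092 (the INTERNAL listener), while applications on the host use localhost:9092 (the OUTSIDE listener); that split is exactly why both KAFKA_LISTENERS and KAFKA_ADVERTISED_LISTENERS must be set.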

Visualizing Flow of Data Using Control Center

The control-center will be running on port 9021. Its configuration is pretty simple, requiring only three environment variables.

  • CONTROL_CENTER_BOOTSTRAP_SERVERS – Address of the Kafka cluster
  • CONTROL_CENTER_REPLICATION_FACTOR – Replication factor for the topics created by the control-center
  • PORT – Port used by the control-center webapp
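A sketch of the service, under the same assumptions as above (image tag and internal listener address kafka:29092):

```yaml
  control-center:
    image: confluentinc/cp-enterprise-control-center:7.3.0
    depends_on:
      - kafka
    ports:
      - "9021:9021"
    environment:
      CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:29092   # the broker's internal listener
      CONTROL_CENTER_REPLICATION_FACTOR: 1            # single-broker cluster
      PORT: 9021
```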

Create Stack

We are going to create the stack in detached mode and then tail the logs to check that everything starts correctly.

docker-compose up -d
docker-compose logs -f
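Beyond tailing the logs, a quick way to confirm the host-side listener is up is a plain TCP probe. This small Python sketch assumes the OUTSIDE listener is advertised at localhost:9092; it only checks that the port accepts connections, not that the broker is healthy:

```python
import socket

def broker_reachable(host: str = "localhost", port: int = 9092,
                     timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the Kafka listener succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, unreachable, ...
        return False

if __name__ == "__main__":
    print("kafka reachable:", broker_reachable())
```

If this prints False, check the logs of the kafka container and the port mapping in the compose file before debugging your client application.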

Control Center UI

Have a look at the control-center by opening http://localhost:9021 in a browser.

ControlCenter Home screen

Click on the listed cluster to see its overview information.

Cluster overview
Your setup should now be ready to use; explore the left-hand menu options to learn what the Control Center offers.


In this tutorial, we covered a basic Kafka setup for local development using Docker, Docker Compose, and Confluent's Docker images. As a disclaimer, this configuration should never be used in a production environment.

If you liked this post, please leave a comment and share with your friends.


By Thiago Lima


I am Thiago Lima, a Brazilian programmer who lives in California. The goal of this channel is to share my journey as a software engineer who left my own country to chase a dream. Here we will talk about technology itself, but also about how to migrate from our own country to become a programmer in US.
