FastAPI Event-Driven Development with Kafka, Zookeeper and Docker Compose: A Step-by-Step Guide, Part 1

Ahmed Nafies
6 min read · Apr 18, 2023

Setting up Kafka and Zookeeper with Docker Compose

Full code is available on GitHub here


In Part 1, we will set up a single-node Kafka and Zookeeper environment using Docker Compose, then produce and consume test messages using the Kafka console producer and consumer. In Part 2, we will add a FastAPI endpoint that handles incoming requests and produces messages to Kafka.
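A single-node setup like the one described above is commonly defined in a `docker-compose.yml` along these lines. This is a minimal sketch using the Confluent community images; the image tags, service names, and port mappings are assumptions and may differ from the article's repository:

```yaml
version: "3.8"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.2   # assumed tag
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.2       # assumed tag
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"                          # expose the broker to the host
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      # Two listeners: one for containers on the compose network,
      # one for clients connecting from the host machine.
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      # A single broker cannot replicate the internal offsets topic.
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

Running `docker compose up -d` with a file like this brings up Zookeeper and one Kafka broker reachable at `localhost:9092` from the host.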


Kafka is a distributed data streaming platform that allows you to publish, store, and process streams of records (messages) in a fault-tolerant and scalable manner. In simple terms, Kafka is like a messaging system that enables applications to send and receive data in real time.

Kafka has three main components:

  1. Producers: Applications that send (publish) messages (also called records) to Kafka.
  2. Brokers: The Kafka servers that store and manage the messages. They work together to form a Kafka cluster, ensuring fault tolerance and scalability.
  3. Consumers: Applications that read (consume) messages from Kafka.

Kafka organizes messages into categories called topics. A producer sends messages to a specific topic, and a consumer subscribes to one or more topics to read the messages. Topics are divided into partitions, which allow for parallelism and help increase the throughput of your Kafka cluster.
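To see topics, partitions, and the console tools in action, commands along these lines can be run against a live single-node cluster. This is a sketch that assumes the broker runs in a Compose service named `kafka` (a Confluent image, where the CLI tools are on the `PATH`) and listens on `localhost:9092`; the topic name is illustrative:

```shell
# Create a topic with 3 partitions; replication factor must be 1 on a single broker
docker compose exec kafka kafka-topics --create --topic test-topic \
  --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092

# Produce messages interactively: each line you type becomes a record
docker compose exec kafka kafka-console-producer \
  --topic test-topic --bootstrap-server localhost:9092

# In another terminal, consume the topic from the earliest offset
docker compose exec kafka kafka-console-consumer \
  --topic test-topic --from-beginning --bootstrap-server localhost:9092
```

With 3 partitions, records produced without a key are spread across partitions, which is what lets multiple consumers in one group read the topic in parallel.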

Kafka is useful for various use cases, such as:

  • Real-time data processing: Analyzing and processing data streams as they arrive.
  • Log aggregation: Collecting logs from multiple sources and centralizing them for further analysis or monitoring.
  • Event sourcing: Storing and processing a sequence of events to derive the current state of a system.
  • Messaging: Enabling communication between different applications or microservices in a distributed system.