
How to Use Docker Compose for Multi-Container Applications


Docker Compose is a powerful tool for managing multi-container Docker applications. It allows you to define and run multiple containers in a single configuration file, making it easier to orchestrate complex systems composed of different services, such as databases, web servers, and application backends. With Docker Compose, you can spin up an entire environment with a single command, reducing the need to manually configure each container.
In this blog post, we’ll explore how to use Docker Compose for multi-container applications. By the end, you’ll understand how to define services, link them together, and deploy them with ease.
What is Docker Compose?
Docker Compose is a tool that allows you to define a multi-container environment using a YAML file (docker-compose.yml). This file contains all the necessary configurations for each container, including images, volumes, networks, and environment variables. Docker Compose makes it simple to define, manage, and run complex applications that consist of multiple services.
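To make this concrete, here is a minimal sketch of what a docker-compose.yml can look like; the two services and image tags are purely illustrative, not part of the example built later in this post:

```yaml
services:
  web:
    image: nginx:alpine   # illustrative: any web server image
  cache:
    image: redis:7        # illustrative: a second, cooperating service
```

Running `docker-compose up` against a file like this starts both containers and puts them on a shared network where each can reach the other by its service name.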
Why Use Docker Compose?
When developing modern applications, it’s common to use several components that need to interact with each other, such as:
• Databases: MySQL, PostgreSQL, MongoDB, etc.
• Back-end services: APIs, microservices, etc.
• Front-end applications: React, Angular, or any web server.
• Caching services: Redis, Memcached.
With Docker Compose, you can define all these services in a single configuration file and start them with just one command. This is much more efficient than managing individual containers manually.
Key Concepts in Docker Compose
Before we dive into an example, let’s cover some of the core concepts of Docker Compose:

  1. Services: These are the containers that make up your application. Each service runs a specific container, such as a database, web server, or API.
  2. Networks: Docker Compose automatically creates a default network, allowing the services to communicate with each other. You can also define custom networks if needed.
  3. Volumes: Volumes are used to persist data across container restarts. For example, databases need persistent storage, which is handled by volumes in Docker Compose.
  4. Build: Instead of pulling pre-built images from Docker Hub, you can define how to build the Docker images for each service.
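The concepts above map directly onto top-level keys in the YAML file. As a hedged sketch (the service, image, and network names here are illustrative, not from the example built below), a custom network is declared once and attached per service:

```yaml
services:
  api:
    build: ./api          # build an image instead of pulling one
    networks:
      - backend
  db:
    image: postgres:13
    networks:
      - backend
    volumes:
      - db_data:/var/lib/postgresql/data   # persist data across restarts

networks:
  backend:                # custom network; services on it resolve each other by name

volumes:
  db_data:                # named volume managed by Docker
```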
Step 1: Install Docker Compose
Before you start using Docker Compose, ensure it’s installed on your system. If you already have Docker installed, Docker Compose is usually included, but you can verify the installation with the following command:
docker-compose --version
If it’s not installed, follow the official Docker Compose installation guide.
Step 2: Create a Multi-Container Application
Let’s walk through an example where we will create a simple web application using Python’s Flask and connect it to a PostgreSQL database. We’ll define this multi-container application using Docker Compose.
  1. Project Structure
     Create a project folder with the following structure:
     my-multi-container-app/
     ├── app/
     │   ├── app.py
     │   └── requirements.txt
     ├── docker-compose.yml
     └── Dockerfile
     • app.py: The Python application that connects to the PostgreSQL database.
     • requirements.txt: The Python dependencies for the Flask app.
     • Dockerfile: The Dockerfile to build the Flask application container.
     • docker-compose.yml: The configuration file to define both the Flask app and the PostgreSQL database.
  2. Create the Flask Application
    In the app/ directory, create a Python file called app.py with the following content:
from flask import Flask
import psycopg2

app = Flask(__name__)

@app.route('/')
def hello():
    try:
        # Connect to the PostgreSQL database
        conn = psycopg2.connect(
            dbname="mydb",
            user="myuser",
            password="mypassword",
            host="db",  # The service name defined in docker-compose.yml
            port="5432"
        )
        conn.close()
        return "Connected to PostgreSQL!"
    except Exception as e:
        return f"Failed to connect to the database: {e}"

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)
This Flask application connects to a PostgreSQL database using the psycopg2 library.
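A small refinement worth considering (not part of the original example): read the connection settings from environment variables instead of hardcoding them, so the same image works in development and production. The variable names below are assumptions chosen to mirror the ones used in the Compose file later in this post:

```python
import os

def db_config():
    """Build psycopg2 connection settings from the environment,
    falling back to the hardcoded values used in this example."""
    return {
        "dbname": os.environ.get("POSTGRES_DB", "mydb"),
        "user": os.environ.get("POSTGRES_USER", "myuser"),
        "password": os.environ.get("POSTGRES_PASSWORD", "mypassword"),
        "host": os.environ.get("DB_HOST", "db"),      # DB_HOST is an assumed name
        "port": os.environ.get("DB_PORT", "5432"),    # DB_PORT is an assumed name
    }
```

The route handler could then call `psycopg2.connect(**db_config())`, and the Compose file's `environment:` section becomes the single place where credentials are set.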

  3. Create the Dockerfile
    In the root directory, create a file called Dockerfile for the Flask app:

# Use an official Python image
FROM python:3.9-slim

# Set the working directory inside the container
WORKDIR /app

# Copy the app directory contents into the container
COPY ./app /app

# Install required Python packages
RUN pip install --no-cache-dir -r requirements.txt

# Expose the port the app will run on
EXPOSE 5000

# Run the Flask app
CMD ["python", "app.py"]
This Dockerfile builds the image for the Flask application container.

  4. Create requirements.txt
    In the app/ folder, create a requirements.txt file with the following content:
    Flask==2.1.1
    psycopg2-binary==2.9.3
    This lists the dependencies required by the Flask app.
  5. Create the Docker Compose Configuration
    Now, let’s define the multi-container environment in the docker-compose.yml file. This file will contain the configurations for both the Flask application and the PostgreSQL database:
version: '3.8'

services:
  web:
    build: .
    container_name: flask-app
    ports:
      - "5000:5000"
    depends_on:
      - db
    environment:
      - FLASK_ENV=development

  db:
    image: postgres:13
    container_name: postgres-db
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypassword
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

volumes:
  postgres_data:
Here’s what each section does:
• web service: This service builds the container from the Flask app’s Dockerfile. It publishes port 5000, where the Flask app will be accessible. It also depends on the db service, which tells Docker Compose to start the db container first; note that depends_on only orders startup and does not wait for PostgreSQL to actually be ready to accept connections.
• db service: This service runs a PostgreSQL database container. It defines the database name, user, and password via environment variables. It also mounts a volume (postgres_data) to persist the database data across container restarts.
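Because depends_on only orders container startup, the Flask app’s first connection attempt can still fail while PostgreSQL initializes. A common workaround is a small retry loop around the connection; the sketch below is my own addition, not part of the original example, and the helper name and parameters are assumptions:

```python
import time

def wait_for(connect, attempts=10, delay=1.0):
    """Call `connect` (any zero-argument callable that raises on failure)
    repeatedly until it succeeds or the attempts are exhausted."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)
    raise RuntimeError(f"service unreachable after {attempts} attempts") from last_error
```

In app.py this could be used as `conn = wait_for(lambda: psycopg2.connect(...), attempts=20, delay=0.5)`, giving the database up to ten seconds to come up before the app gives up.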
Step 3: Build and Run the Application

  1. Build the application: In the project root, run the following command to build the image for the web service (the db service uses the pre-built postgres:13 image, so there is nothing to build for it):
     docker-compose build
  2. Start the containers: After the build process is complete, start both containers by running:
     docker-compose up
     Docker Compose will automatically create the necessary containers and networks and start the application. It will also output logs to the console, so you can monitor the containers as they start.
  3. Access the application: Once the containers are running, open a browser and visit http://localhost:5000. If everything is set up correctly, you should see the message:
     Connected to PostgreSQL!
Step 4: Clean Up
When you’re done, you can stop and remove the containers and networks with the following command:
docker-compose down
This will stop the containers and remove them, along with the default network. The named database volume is preserved by default unless you specify otherwise.
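If you also want to delete the named volume, and with it the stored database data, docker-compose provides a flag for that:

```shell
# Stop and remove containers, the default network, AND named volumes
docker-compose down --volumes
```

Use this only when you genuinely want a clean slate; the data in postgres_data cannot be recovered afterwards.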
Conclusion
Docker Compose simplifies the process of working with multi-container applications by allowing you to define multiple services in a single YAML file. It manages the complexities of linking containers, handling dependencies, and ensuring that your application works seamlessly across different environments.
By following the steps outlined in this post, you can quickly set up a multi-container application, like the Flask and PostgreSQL example, and deploy it with minimal effort. With Docker Compose, managing your containers has never been easier, and it’s a great tool for building scalable and maintainable applications. Happy Dockerizing!
