Docker approaches to multiple environments
Managing multiple environments—development, staging, and production—is a cornerstone of modern software delivery. Docker simplifies this process by offering flexible, repeatable, and isolated environments. But how you structure your Docker setup across these environments can make or break your deployment strategy.
In this tutorial, we’ll explore the most effective Docker approaches to managing multiple environments, including:
- Multiple Dockerfiles
- Multi-stage builds
- Environment-specific compose files
- Runtime environment variables
- Volume and network strategies
- CI/CD integration tips
Why environment separation matters
Each environment serves a unique purpose:
| Environment | Purpose |
|---|---|
| Development | Fast iteration, debugging, hot reloading |
| Staging | Pre-production testing, QA, performance checks |
| Production | Secure, optimized, stable release |
Docker allows you to isolate dependencies, configurations, and runtime behaviors across these environments—without polluting your host machine or risking production stability.
Approach 1: Multiple Dockerfiles
This is the simplest and most explicit method.
Structure
```text
Dockerfile.dev
Dockerfile.staging
Dockerfile.prod
```
Each file contains environment-specific instructions. For example:
```dockerfile
# Dockerfile.dev
FROM python:3.11
WORKDIR /app
COPY . .
RUN pip install -r requirements-dev.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

```dockerfile
# Dockerfile.prod
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["gunicorn", "myapp.wsgi:application", "--bind", "0.0.0.0:8000"]
```
Build and run
```bash
docker build -f Dockerfile.dev -t myapp-dev .
docker build -f Dockerfile.prod -t myapp-prod .
```
Pros
- Clear separation
- Easy to debug
Cons
- Code duplication
- Harder to maintain shared logic
Approach 2: Multi-stage builds
Multi-stage builds allow you to define multiple build stages in a single Dockerfile and selectively copy artifacts between them.
```dockerfile
# Dockerfile
FROM node:18 AS dev
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "run", "dev"]

FROM node:18 AS prod
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
CMD ["npm", "start"]
```
You can build specific stages:
```bash
docker build --target dev -t myapp-dev .
docker build --target prod -t myapp-prod .
```
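The stages above are independent, but the real strength of multi-stage builds is copying artifacts between stages with COPY --from. A minimal sketch, assuming the project has an npm run build script that emits static files into dist/:

```dockerfile
# build stage: compile with dev dependencies available
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# production stage: ship only the compiled output
FROM nginx:alpine AS prod
COPY --from=builder /app/dist /usr/share/nginx/html
```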
Pros
- Single source of truth
- DRY (Don’t Repeat Yourself)
Cons
- Slightly more complex syntax
- Requires Docker 17.05+
Approach 3: Environment-specific Docker Compose files
Docker Compose supports multiple override files. You can define a base docker-compose.yml and override it with environment-specific files.
Structure
```text
docker-compose.yml
docker-compose.override.yml
docker-compose.prod.yml
docker-compose.staging.yml
```
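Note that docker-compose.override.yml is special: Compose merges it on top of docker-compose.yml automatically whenever you run the CLI without -f flags, which makes it a natural home for development settings:

```bash
# no -f flags: Compose reads docker-compose.yml plus docker-compose.override.yml
docker-compose up
```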
Base Compose
```yaml
version: '3.8'
services:
  web:
    build:
      context: .
    ports:
      - "8000:8000"
    env_file:
      - .env
```
Production override
```yaml
# docker-compose.prod.yml
services:
  web:
    build:
      dockerfile: Dockerfile.prod
    environment:
      DJANGO_SETTINGS_MODULE: myapp.settings.production
    restart: always
```
Run with overrides
```bash
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```
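The same pattern covers staging; point the second -f at the staging file instead:

```bash
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
```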
Pros
- Centralized config
- Easy overrides
- Compose-native
Cons
- Requires discipline in file management
Approach 4: Runtime environment variables
Use .env files or CI/CD secrets to inject environment-specific values.
Example .env.dev
```text
DEBUG=True
DATABASE_URL=postgres://localhost/devdb
```
Example .env.prod
```text
DEBUG=False
DATABASE_URL=postgres://prod-db:5432/proddb
```
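By default, Compose reads a file named .env from the project directory and uses it for variable substitution. To switch between the files above without renaming them, recent Compose versions accept an --env-file flag:

```bash
# use the production values for substitution in the Compose file
docker-compose --env-file .env.prod up -d
```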
Compose integration
```yaml
env_file:
  - .env
```
Alternatively, set values under the environment: key directly, which works well when a CI/CD pipeline injects secrets at deploy time.
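A sketch of that approach, where the runner exports DATABASE_URL before invoking Compose (the variable name follows the .env examples above):

```yaml
services:
  web:
    environment:
      DEBUG: "False"
      DATABASE_URL: ${DATABASE_URL}  # substituted from the runner's environment
```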
Advanced tips
Volume strategies
- Use bind mounts for development (./src:/app)
- Use named volumes for production (app_data:/app)
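As Compose overrides, those two strategies might look like this (service and volume names follow the earlier examples):

```yaml
# docker-compose.override.yml (development): bind mount for live reloads
services:
  web:
    volumes:
      - ./src:/app
```

```yaml
# docker-compose.prod.yml (production): named volume managed by Docker
services:
  web:
    volumes:
      - app_data:/app

volumes:
  app_data:
```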
Network isolation
```yaml
networks:
  dev_net:
    driver: bridge
```
Attach services to different networks per environment.
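For example, a development override might attach the service to its own bridge network so dev containers cannot reach staging or production services:

```yaml
services:
  web:
    networks:
      - dev_net

networks:
  dev_net:
    driver: bridge
```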
CI/CD integration
Use environment-specific build and deploy steps:
```yaml
# GitHub Actions example (e.g. .github/workflows/build.yml)
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build production image
        run: docker build -f Dockerfile.prod -t myapp-prod .
```
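In a real pipeline you would usually push the image to a registry after building it. A hedged sketch of two extra steps for the job above, assuming Docker Hub credentials stored as repository secrets named DOCKERHUB_USERNAME and DOCKERHUB_TOKEN (the myuser/myapp image name is hypothetical):

```yaml
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Push production image
        run: |
          docker tag myapp-prod myuser/myapp:prod
          docker push myuser/myapp:prod
```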
Testing across environments
Use docker-compose -f to spin up test environments:
```bash
docker-compose -f docker-compose.yml -f docker-compose.test.yml up --abort-on-container-exit
```
You can also run integration tests inside containers using pytest, curl, or custom scripts.
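A hypothetical docker-compose.test.yml swaps the service command for the test runner; combined with --abort-on-container-exit, the whole stack stops as soon as the tests finish:

```yaml
# docker-compose.test.yml (hypothetical)
services:
  web:
    command: pytest
    environment:
      DJANGO_SETTINGS_MODULE: myapp.settings.test  # assumed test settings module
```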
Modular template strategy
For reusable deployments, consider templating:
- Dockerfile.base + Dockerfile.override
- compose.base.yml + compose.env.yml
- .env.template + .env.dev, .env.prod
This makes it easy to clone, customize, and deploy across projects.
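A hypothetical .env.template keeps the variable names under version control while real values live only in the per-environment copies:

```text
# .env.template: copy to .env.dev / .env.prod and fill in values
DEBUG=
DATABASE_URL=
```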
Conclusion
Docker offers multiple flexible approaches to managing environments. Whether you prefer explicit separation via multiple Dockerfiles or lean into multi-stage builds and Compose overrides, the key is consistency and modularity.
For a typical workflow, especially one that involves CI/CD pipelines, creative asset servers, and modular deployments, combining multi-stage builds with Compose overrides and .env injection is the most practical solution. It’s scalable, maintainable, and production-ready.
