If your Docker images are bloated and slow to deploy, multi-stage builds are the single most impactful optimization you can make. By separating your build environment from your runtime environment, you can reduce image sizes from gigabytes to mere megabytes — often achieving a 90% or greater reduction. In this guide, we’ll walk through practical multi-stage build patterns for real-world applications in 2026.
Why Image Size Matters More Than Ever
In the era of edge computing, serverless containers, and rapid auto-scaling, every megabyte counts. Larger images mean:
- Slower cold starts in Kubernetes and serverless platforms
- Higher storage and bandwidth costs in container registries
- Larger attack surface with unnecessary packages and tools
- Slower CI/CD pipelines waiting on image pushes and pulls
Multi-stage builds solve all of these problems by letting you use full-featured build tools during compilation while shipping only the bare minimum at runtime.
The Basics: A Simple Multi-Stage Dockerfile
Let’s start with a Go application. Without multi-stage builds, you might write:
# ❌ Single-stage: ~1.1 GB
FROM golang:1.23
WORKDIR /app
COPY . .
RUN go build -o server .
CMD ["./server"]

The resulting image includes the entire Go toolchain, source code, and build cache. Here’s the multi-stage version:
# ✅ Multi-stage: ~12 MB
FROM golang:1.23 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags='-s -w' -o server .
FROM scratch
COPY --from=builder /app/server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
EXPOSE 8080
CMD ["/server"]

The final image contains only the compiled binary and SSL certificates, nothing else. That’s a reduction from ~1.1 GB to ~12 MB.
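The CGO_ENABLED=0 flag matters here: scratch contains no libc, so the binary must be fully static or it won’t run at all. A quick local sanity check (a sketch; run from the project directory with the Go toolchain installed):

```shell
# Build exactly as the Dockerfile does, then confirm the result is static
CGO_ENABLED=0 GOOS=linux go build -ldflags='-s -w' -o server .
file server   # expect "statically linked" in the output
```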
Multi-Stage Builds for Node.js Applications
Node.js apps benefit enormously from multi-stage builds, especially when you separate dependency installation from the runtime:
# Stage 1: Install ALL dependencies (including devDependencies)
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
# Stage 2: Build the application
FROM deps AS builder
COPY . .
RUN npm run build
# Stage 3: Production runtime
FROM node:22-alpine AS runtime
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
COPY --from=builder /app/dist ./dist
EXPOSE 3000
USER node
CMD ["node", "dist/index.js"]

Key optimizations here:
- Three stages: dependencies → build → runtime, each with a clear purpose
- Production-only deps: The final stage runs npm ci --omit=dev, excluding test frameworks, linters, and build tools
- Alpine base: Using node:22-alpine instead of the full Debian-based image saves ~800 MB
- Non-root user: Running as node instead of root for security
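One caveat with COPY . . in the build stage: any local node_modules, build output, or .git directory in the build context gets copied in too, slowing the build and potentially clobbering the freshly installed dependencies. A minimal .dockerignore helps (entries are a typical starting point, adjust for your project):

```
node_modules
dist
.git
npm-debug.log
Dockerfile
.dockerignore
```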
Advanced Pattern: Build Caching with Mounted Caches
Docker BuildKit (now the default engine) supports cache mounts that persist across builds, dramatically speeding up repeated builds:
# syntax=docker/dockerfile:1
FROM rust:1.82 AS builder
WORKDIR /app
COPY . .
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/app/target \
cargo build --release && \
cp target/release/myapp /usr/local/bin/myapp
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates && rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]

The --mount=type=cache directive keeps the Cargo registry and build artifacts cached between builds, so incremental builds are nearly instant.
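The same cache-mount technique applies to other package managers. For the Node.js example earlier, a sketch that persists npm’s download cache (npm stores it under /root/.npm when running as root):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Persist npm's download cache across builds; node_modules itself is still
# installed fresh each time, so the layer stays reproducible
RUN --mount=type=cache,target=/root/.npm npm ci
```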
Advanced Pattern: Parallel Build Stages
BuildKit can execute independent stages in parallel. This is perfect for monorepos or apps with separate frontend and backend builds:
# These two stages run in PARALLEL
FROM node:22-alpine AS frontend
WORKDIR /app/frontend
COPY frontend/package.json frontend/package-lock.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build
FROM python:3.13-slim AS backend
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY backend/ ./backend/
# Final stage combines both
FROM python:3.13-slim
WORKDIR /app
COPY --from=backend /usr/local/lib/python3.13/site-packages /usr/local/lib/python3.13/site-packages
COPY --from=backend /app/backend ./backend
COPY --from=frontend /app/frontend/dist ./static
CMD ["python", "backend/main.py"]

Since the frontend and backend stages don’t depend on each other, BuildKit builds them simultaneously, cutting total build time significantly.
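A useful side effect of named stages: you can build any one of them in isolation with --target, and BuildKit skips everything the target doesn’t need. A sketch (image tags are illustrative):

```shell
# Build only the frontend stage, e.g. to inspect its output
docker build --target frontend -t myapp-frontend .

# Full build; BuildKit schedules the independent stages concurrently
docker build -t myapp .
```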
Security Best Practices for Multi-Stage Builds
Multi-stage builds are also a security win. Here are key practices:
- Never copy secrets into the final stage. Use BuildKit secret mounts instead: RUN --mount=type=secret,id=npmrc,target=/app/.npmrc npm ci
- Use distroless or scratch base images for the final stage when possible; no shell means attackers have fewer tools to exploit
- Pin image digests in production for reproducibility: FROM node:22-alpine@sha256:abc123... AS builder
- Scan only the final image. Build stages can have vulnerabilities, and that’s fine since they’re never deployed
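For the secret-mount bullet above, the secret is supplied at build time and never written into an image layer. A sketch of the corresponding build command (the id must match the Dockerfile; the source path is illustrative):

```shell
# .npmrc is mounted only for the duration of the RUN instruction
docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
```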
Measuring Your Results
Always verify your optimization with docker images and dive:
# Check image size
docker images myapp
# Analyze layers with dive
dive myapp:latest
# Check for unnecessary files
docker run --rm myapp:latest find / -type f | wc -l

A well-optimized multi-stage build should have minimal layers, no build tools, no source code, and no package manager caches in the final image.
Quick Reference: Base Image Sizes
Choosing the right final-stage base image matters:
- scratch — 0 MB (for static binaries only)
- distroless/static — ~2 MB (static binaries + SSL certs)
- alpine — ~7 MB (minimal Linux with musl libc)
- debian-slim — ~75 MB (when you need glibc)
- ubuntu — ~78 MB (familiar but heavier)
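As a concrete example of trading a couple of megabytes for convenience, the Go final stage from earlier can swap scratch for a distroless static image, which already bundles CA certificates and a non-root user. A sketch (the image name and tag are a common choice; verify against your registry):

```dockerfile
FROM gcr.io/distroless/static-debian12:nonroot
# CA certs and a nonroot user are included, so no extra COPY steps are needed
COPY --from=builder /app/server /server
EXPOSE 8080
CMD ["/server"]
```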
Conclusion
Multi-stage builds are no longer optional — they’re a fundamental DevOps practice. By separating build concerns from runtime concerns, you get smaller images, faster deployments, better security, and lower costs. Start with the patterns above and adapt them to your stack. Your CI/CD pipeline (and your cloud bill) will thank you.
