Test Blog Post

Starter template for writing out a blog post using MDX/JSX and Next.js.

Abdullah Muhammad

Published on May 17, 2026 · 5 min read


Introduction

Docker has transformed the way developers build, ship, and run applications. By packaging applications into lightweight, portable containers, Docker eliminates the notorious "it works on my machine" problem and brings consistency across development, testing, and production environments.

What is Docker?

Docker is an open-source platform that automates the deployment of applications inside software containers. Think of containers as lightweight, standalone packages that include everything needed to run a piece of software:

  • Application code
  • Runtime environment
  • System tools and libraries
  • Configuration files

Unlike traditional virtual machines, containers share the host system's kernel, making them incredibly fast and resource-efficient.
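You can see the shared kernel directly: a container reports the host's kernel version rather than booting its own (a quick sketch; requires a running Docker daemon):

```shell
# Kernel version on the host
uname -r

# The same version, printed from inside a container --
# containers share the host kernel instead of running their own OS
docker run --rm alpine uname -r
```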

Containers vs Virtual Machines

Understanding the difference between containers and VMs is crucial for appreciating Docker's benefits.

| Aspect         | Containers      | Virtual Machines         |
| -------------- | --------------- | ------------------------ |
| Boot Time      | Seconds         | Minutes                  |
| Size           | Megabytes       | Gigabytes                |
| OS             | Shared kernel   | Full OS per VM           |
| Performance    | Near-native     | Overhead from hypervisor |
| Isolation      | Process-level   | Hardware-level           |
| Resource Usage | Minimal         | Heavy                    |

Getting Started with Docker

Let's set up Docker and run your first container.


Installation

Docker Desktop is the easiest way to get started. Download it from the official Docker website for your operating system:

# Verify installation
docker --version
# Docker version 24.0.7, build afdd53b

# Check Docker is running
docker info

Your First Container

Run a simple container to verify everything works:

# Pull and run the hello-world image
docker run hello-world

# Run an interactive Ubuntu container
docker run -it ubuntu bash

# Run Nginx web server
docker run -d -p 8080:80 nginx

The -d flag runs the container in detached mode, while -p 8080:80 maps port 8080 on your host to port 80 in the container.
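With Nginx running detached, a quick request confirms the port mapping works (assumes curl is installed on the host):

```shell
# Request the containerized Nginx through the mapped host port;
# an HTML response containing "Welcome to nginx!" means the mapping works
curl -s http://localhost:8080 | head -n 4
```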


Understanding Docker Images

A Docker image is a read-only template containing instructions for creating a container. Images are built in layers, with each layer representing a set of filesystem changes.
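You can inspect those layers with docker history, which lists each build instruction that produced a layer along with its size:

```shell
# Show the layer stack of an image, one row per build instruction
docker history nginx

# Same data with full, untruncated commands
docker history --no-trunc nginx
```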


Working with Images

# List local images
docker images

# Pull an image from Docker Hub
docker pull node:20-alpine

# Remove an image
docker rmi node:20-alpine

# Search for images
docker search postgres

Image Tags

Tags identify specific versions of an image:

# Pull specific versions
docker pull python:3.12
docker pull python:3.12-slim
docker pull python:3.12-alpine

# Latest tag (default, but not recommended for production)
docker pull python:latest

Always use specific tags in production to ensure reproducibility.
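For even stricter reproducibility, you can pin an image by its content digest: a tag can be re-pointed to a new image, but a digest always refers to exactly the same bytes.

```shell
# Find the digest of a pulled image
docker images --digests python

# Or extract it with a Go template
docker inspect --format '{{index .RepoDigests 0}}' python:3.12

# Pull by digest (the digest shown here is a placeholder, not a real value)
docker pull python@sha256:<digest>
```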


Writing Dockerfiles

A Dockerfile is a text document containing instructions to build a Docker image. Let's create one for a Node.js application.


Basic Dockerfile

# Use an official Node.js runtime as the base image
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./

# Install production dependencies only
# (--omit=dev replaces the deprecated --only=production flag)
RUN npm ci --omit=dev

# Copy application source code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the application
CMD ["node", "server.js"]

Building and Running

# Build the image
docker build -t my-node-app .

# Run a container from the image
docker run -d -p 3000:3000 --name my-app my-node-app

# View running containers
docker ps

# View container logs
docker logs my-app

# Stop the container
docker stop my-app

Dockerfile Best Practices

Writing efficient Dockerfiles is an art. Here are key practices to follow:


1. Use Multi-Stage Builds

Multi-stage builds dramatically reduce final image size:

# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/server.js"]

2. Leverage Layer Caching

Order instructions from least to most frequently changing:

FROM node:20-alpine
WORKDIR /app

# Dependencies change less often
COPY package*.json ./
RUN npm ci

# Source code changes frequently
COPY . .

CMD ["npm", "start"]

3. Use .dockerignore

Create a .dockerignore file to exclude unnecessary files:

node_modules
npm-debug.log
.git
.gitignore
README.md
.env
.env.*
coverage
.nyc_output

Container Management

Essential commands for managing Docker containers:


Lifecycle Commands

# Create and start a container
docker run -d --name mycontainer nginx

# Stop a running container
docker stop mycontainer

# Start a stopped container
docker start mycontainer

# Restart a container
docker restart mycontainer

# Remove a container
docker rm mycontainer

# Remove a running container forcefully
docker rm -f mycontainer

Inspection Commands

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# View container logs
docker logs -f mycontainer

# Execute command inside running container
docker exec -it mycontainer bash

# Inspect container details
docker inspect mycontainer

# View container resource usage
docker stats
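docker inspect dumps a large JSON document; the --format flag takes a Go template that pulls out a single field, which is handy in scripts:

```shell
# Extract just the container's IP address on the default bridge network
docker inspect --format '{{.NetworkSettings.IPAddress}}' mycontainer

# Check whether the container is currently running (prints true or false)
docker inspect --format '{{.State.Running}}' mycontainer
```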

Docker Compose

Docker Compose allows you to define and run multi-container applications using a YAML file.


Sample docker-compose.yml

version: '3.8'

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://user:pass@db:5432/mydb
    depends_on:
      - db
      - redis
    volumes:
      - ./uploads:/app/uploads

  db:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: mydb
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Compose Commands

# Start all services
docker compose up -d

# View running services
docker compose ps

# View logs from all services
docker compose logs -f

# Stop all services
docker compose down

# Stop and remove volumes
docker compose down -v

# Rebuild images
docker compose build

# Scale a service
docker compose up -d --scale app=3

Networking in Docker

Docker provides several networking options for container communication.


Network Types

  1. Bridge - Default network for standalone containers
  2. Host - Container uses host's network directly
  3. None - Container has no network access
  4. Overlay - For multi-host networking in Swarm mode
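A quick way to feel the difference between these modes is to list network interfaces from inside a container under each one (a sketch; requires a running Docker daemon):

```shell
# Bridge (default): the container gets its own network namespace and a private IP
docker run --rm alpine ip addr

# Host: the container shares the host's network stack, so it sees the host's interfaces
docker run --rm --network host alpine ip addr

# None: only a loopback interface, no external connectivity
docker run --rm --network none alpine ip addr
```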

Working with Networks

# List networks
docker network ls

# Create a custom network
docker network create my-network

# Run container on specific network
docker run -d --network my-network --name api my-api-image

# Connect existing container to network
docker network connect my-network existing-container

# Inspect network
docker network inspect my-network
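One practical benefit of user-defined networks is built-in DNS: containers on the same custom network can reach each other by container name, with no IP addresses hardcoded.

```shell
# Start a web server on a custom network, then resolve it by name
# from a second container on the same network
docker network create demo-net
docker run -d --network demo-net --name web nginx
docker run --rm --network demo-net alpine ping -c 1 web
```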

Volumes and Data Persistence

Containers are ephemeral by default. Use volumes to persist data.


Volume Types

# Named volume (recommended)
docker run -d -v mydata:/app/data myimage

# Bind mount (maps host directory)
docker run -d -v /host/path:/container/path myimage

# Anonymous volume
docker run -d -v /app/data myimage
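To see persistence in action, write a file through a named volume, let the container exit, and read the file back from a brand-new container:

```shell
# Write data through a named volume; the container is removed on exit
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting.txt'

# A fresh container sees the same data, because it lives in the volume
docker run --rm -v mydata:/data alpine cat /data/greeting.txt
# Expected output: hello
```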

Managing Volumes

# List volumes
docker volume ls

# Create a volume
docker volume create myvolume

# Inspect a volume
docker volume inspect myvolume

# Remove unused volumes
docker volume prune

# Remove specific volume
docker volume rm myvolume

Security Best Practices

Security should be a top priority when working with Docker.


Key Recommendations

  1. Never run as root - Use a non-root user in your Dockerfile:
FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
WORKDIR /app
# ... rest of Dockerfile
  2. Scan images for vulnerabilities:
docker scout quickview myimage
docker scout cves myimage
  3. Use minimal base images - Alpine-based images are smaller and have fewer vulnerabilities

  4. Don't store secrets in images - Use environment variables or secret management tools

  5. Keep images updated - Regularly rebuild with latest base images
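For the secrets recommendation above, one common approach is to keep secrets in an env file that is excluded from both the image (via .dockerignore) and version control (via .gitignore), and inject them only at run time. API_KEY below is a made-up variable name for illustration:

```shell
# .env is listed in .dockerignore, so it never ends up baked into an image layer
echo 'API_KEY=supersecret' > .env

# Inject the variables only when the container starts
docker run -d --env-file .env my-node-app

# Or pass a single variable directly
docker run -d -e API_KEY=supersecret my-node-app
```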


Common Use Cases

Docker shines in numerous scenarios:


Development Environments

Ensure every developer has an identical setup:

# docker-compose.dev.yml
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development

CI/CD Pipelines

Build, test, and deploy consistently across environments:

# Example GitHub Actions workflow
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run tests
        run: docker run myapp:${{ github.sha }} npm test
      - name: Push to registry
        run: docker push myapp:${{ github.sha }}

Microservices Architecture

Deploy independent services that communicate via networks:

services:
  auth-service:
    build: ./services/auth
    networks:
      - backend

  user-service:
    build: ./services/user
    networks:
      - backend

  api-gateway:
    build: ./gateway
    ports:
      - "80:80"
    networks:
      - backend
      - frontend

networks:
  backend:
  frontend:

Troubleshooting Tips

When things go wrong, these commands help diagnose issues:

# View container logs
docker logs --tail 100 container_name

# Access container shell
docker exec -it container_name sh

# Check container processes
docker top container_name

# View real-time events
docker events

# Check disk usage
docker system df

# Clean up unused resources
docker system prune -a

Conclusion

Docker has become an essential tool in modern software development. By containerizing your applications, you gain portability, consistency, and scalability that were previously difficult to achieve.

Start small by containerizing a simple application, then gradually adopt Docker Compose for multi-container setups. As your infrastructure grows, explore orchestration tools like Kubernetes for production-grade container management.

Ready to containerize your next project? Check out our blog for more DevOps tutorials, or explore our pricing page to see how SaaS-kit can accelerate your development workflow.
