Why Docker Matters for Web Development
Every web developer has experienced the frustration of an application that works perfectly on their machine but breaks in staging or production. Different PHP versions, missing extensions, incompatible Node.js versions, database configuration differences — the list of things that can go wrong between environments is long and painful.
Docker solves this problem by packaging your application and all its dependencies into a container — a lightweight, isolated environment that runs identically on every machine. At StrikingWeb, we adopted Docker across all our projects in early 2019, and it has fundamentally improved our development workflow, onboarding speed, and deployment reliability.
Understanding the Core Concepts
Images vs Containers
A Docker image is a blueprint — a read-only template containing your application code, runtime, system tools, libraries, and settings. An image is like a class in object-oriented programming. A container is a running instance of an image, like an object instantiated from that class. You can run multiple containers from the same image, each isolated from the others.
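The class/object analogy is easy to see from the command line. A quick sketch, assuming an image named myapp already exists locally (substitute any image you have built or pulled):

```shell
# Start two isolated containers from the same "myapp" image,
# each with its own name and host port
docker run -d --name web1 -p 3001:3000 myapp
docker run -d --name web2 -p 3002:3000 myapp

# Both containers run independently; stopping one does not affect the other
docker ps --filter "name=web"
```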
Dockerfiles
A Dockerfile is a text file containing instructions for building a Docker image. Each instruction creates a layer in the image, and Docker caches these layers to speed up subsequent builds. Here is a Dockerfile for a typical Node.js web application:
# Use an official Node.js runtime as the base image
FROM node:12-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package files first (for better caching)
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy the rest of the application
COPY . .
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run the application
CMD ["node", "server.js"]
The order of instructions matters for build performance. By copying package.json before the rest of the application code, Docker can cache the npm install step and only re-run it when dependencies change, not when you edit your application code.
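You can see the effect of layer caching directly by rebuilding after touching only application code. A sketch (the exact cache messages vary between the classic builder and BuildKit):

```shell
docker build -t myapp .        # first build: every layer executes
echo "// tweak" >> server.js   # change application code only
docker build -t myapp .        # rebuild: the COPY package*.json and
                               # npm ci layers are reported as cached;
                               # only the final COPY and later layers re-run
```

If you instead edit package.json, the cache is invalidated from the COPY package*.json layer onward and npm ci runs again, which is exactly what you want.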
Volumes
Volumes are Docker's mechanism for persisting data and sharing files between the host machine and containers. In development, you typically mount your source code as a volume so that changes you make on your host are immediately reflected inside the container without rebuilding the image.
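In practice this is a bind mount passed to docker run. A sketch with illustrative paths and image name:

```shell
# Mount the current directory into the container at /app;
# edits made on the host are visible inside the container immediately,
# with no image rebuild required
docker run -p 3000:3000 -v "$(pwd)":/app myapp
```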
Docker Compose for Multi-Container Applications
Most web applications involve more than just application code. You need a database, possibly a cache layer like Redis, and perhaps a reverse proxy. Docker Compose allows you to define and manage multi-container applications using a simple YAML file.
A Laravel Development Environment
Here is a docker-compose.yml file we use for Laravel projects at StrikingWeb:
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/var/www/html
    ports:
      - "8000:8000"
    depends_on:
      - mysql
      - redis
    environment:
      DB_HOST: mysql
      REDIS_HOST: redis

  mysql:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: secret
    volumes:
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"

  redis:
    image: redis:6-alpine
    ports:
      - "6379:6379"

volumes:
  mysql_data:
With this configuration, a new developer joining the project can get the entire stack running with a single command: docker-compose up. No need to install PHP, MySQL, or Redis on their machine. No need to configure database users or set up connection strings. Everything is defined in code and version-controlled alongside the application.
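Day-to-day interaction with the stack happens through docker-compose as well. For example (the artisan command assumes a standard Laravel setup):

```shell
docker-compose up -d                         # start app, mysql, and redis in the background
docker-compose exec app php artisan migrate  # run database migrations inside the app container
docker-compose logs -f mysql                 # follow the MySQL service logs
docker-compose down                          # stop and remove the containers and networks
```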
Essential Docker Commands
These are the Docker commands you will use daily as a web developer:
- docker build -t myapp . — Build an image from the Dockerfile in the current directory and tag it as "myapp"
- docker run -p 3000:3000 myapp — Run a container from the "myapp" image and map port 3000 on the host to port 3000 in the container
- docker-compose up -d — Start all services defined in docker-compose.yml in detached mode
- docker-compose down — Stop and remove the containers and networks (named volumes are preserved unless you add the -v flag)
- docker exec -it container_name bash — Open a shell inside a running container
- docker logs container_name — View the output logs of a container
- docker ps — List all running containers
- docker images — List all locally available images
Dockerfile Best Practices
Use Multi-Stage Builds
Multi-stage builds allow you to use multiple FROM statements in a single Dockerfile. This is particularly useful for compiled languages or when you need build tools that should not be included in the production image:
# Build stage
FROM node:12-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
The final image only contains Nginx and the compiled static files — no Node.js, no node_modules, no source code. This results in dramatically smaller images (often 10-50x smaller) and a reduced attack surface.
Choose the Right Base Image
Alpine-based images (like node:12-alpine) are significantly smaller than their full counterparts. A standard Node.js image is around 900MB, while the Alpine variant is approximately 90MB. For production images, always prefer Alpine unless you have specific requirements that need a full Linux distribution.
Use .dockerignore
Just as .gitignore prevents unnecessary files from being tracked by Git, .dockerignore prevents unnecessary files from being copied into your Docker image during the build process. A typical .dockerignore for a web project includes:
node_modules
.git
.env
*.md
docker-compose.yml
.dockerignore
Dockerfile
Development Workflow with Docker
Our typical development workflow at StrikingWeb using Docker looks like this:
- Project setup: Clone the repository and run docker-compose up. The application is running within minutes, regardless of what is installed on the developer's machine.
- Development: Source code is mounted as a volume, so changes are reflected immediately. Hot reload works exactly as it does without Docker.
- Testing: Run tests inside the container to ensure they execute in the same environment as CI/CD: docker-compose exec app npm test
- Debugging: Attach to the container for interactive debugging, or view logs with docker-compose logs -f app.
- Deployment: Build a production image, push it to a container registry (Docker Hub, AWS ECR, or Google Container Registry), and deploy the exact same image that was tested.
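The deployment step in particular is only a handful of commands. A sketch using Docker Hub, where the repository name and version tag are placeholders:

```shell
docker build -t myorg/myapp:1.4.2 .   # build the production image
docker push myorg/myapp:1.4.2         # push it to the registry

# On the server (or in your orchestrator's configuration):
docker pull myorg/myapp:1.4.2
docker run -d -p 80:3000 myorg/myapp:1.4.2
```

Because the tag identifies an exact image, the bytes running in production are the same bytes that passed your tests.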
Common Pitfalls and How to Avoid Them
Slow Builds
If your Docker builds take a long time, the most likely cause is poor layer caching. Always copy dependency files (package.json, composer.json, requirements.txt) before copying application code. This ensures the dependency installation layer is cached and only invalidated when dependencies actually change.
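The two orderings side by side (same instructions, very different cache behavior):

```dockerfile
# Cache-unfriendly: any source change invalidates the npm ci layer
COPY . .
RUN npm ci

# Cache-friendly: npm ci re-runs only when package files change
COPY package*.json ./
RUN npm ci
COPY . .
```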
Large Images
Production images should be as small as possible. Use multi-stage builds, Alpine base images, and .dockerignore files. Review your images with docker history myapp to identify which layers are consuming the most space.
Data Persistence
Containers are ephemeral — when a container is destroyed, any data written inside it is lost. Always use named volumes for data that needs to persist across container restarts, such as database files and uploaded content.
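For example, with a standalone MySQL container (volume name and password are illustrative):

```shell
# Create a named volume and attach it to MySQL's data directory
docker volume create mysql_data
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql_data:/var/lib/mysql \
  mysql:8.0

# Destroying and recreating the container keeps the data, because it
# lives in the volume rather than the container's writable layer
docker rm -f db
docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -v mysql_data:/var/lib/mysql \
  mysql:8.0
```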
Docker is not just about deployment. It is fundamentally about development consistency. The guarantee that every developer on the team is running the same environment eliminates an entire class of bugs and saves hours of debugging time.
Getting Started Today
If you are new to Docker, start small. Take an existing project, write a Dockerfile for it, and run it in a container. Once that feels comfortable, add a docker-compose.yml with your database. Before long, you will wonder how you ever developed without it.
At StrikingWeb, Docker is a core part of our development and deployment infrastructure. We use it across all projects — from single-page applications to complex microservice architectures. If you need help containerizing your application or building a Docker-based development workflow, we are here to help.