Docker 101: Understanding Containers from Scratch

Docker Basics and Docker Compose Explained

Introduction to Docker

Docker is a platform designed to create, deploy, and run applications inside containers. Containers bundle an application with all its dependencies, ensuring consistency across different environments.

Unlike virtual machines, containers are lightweight and share the host operating system kernel, making them efficient for development, testing, and deployment.

Basic Docker Commands

To start using Docker, here are some essential commands:

  • docker run [image] – Runs a container from the specified image.
  • docker ps – Lists running containers.
  • docker ps -a – Lists all containers, including stopped ones.
  • docker stop [container_id] – Stops a running container.
  • docker rm [container_id] – Removes a container.
  • docker images – Lists available Docker images.
  • docker rmi [image_id] – Removes a Docker image.
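
As a quick illustration, here is a minimal session that ties these commands together (the nginx image is just an example; -d runs the container in the background and --name gives it a fixed name):

docker run -d --name web nginx
docker ps
docker stop web
docker rm web
docker rmi nginx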

Creating Your First Docker Container

You can run a simple container by pulling a test image and running it:

docker run hello-world

This command downloads the hello-world image if it’s not already on your system and runs it, printing a confirmation message.

Dockerfile: Building Your Own Image

A Dockerfile is a text file that contains instructions to build a Docker image. It automates image creation and ensures consistency.

Example Dockerfile for a Node.js application:

# Dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]

Explanation:

  • FROM node:18 – Uses the official Node.js image version 18 as the base.
  • WORKDIR /app – Sets the working directory inside the container.
  • COPY package*.json ./ – Copies package files for dependency installation.
  • RUN npm install – Installs Node.js dependencies.
  • COPY . . – Copies the rest of the application files.
  • EXPOSE 3000 – Documents that the app listens on port 3000 (the port still has to be published with -p when you run the container).
  • CMD ["node", "index.js"] – Defines the command to run the app.

Building and Running Your Image

Build the image using:

docker build -t my-node-app .

Run the container with:

docker run -d -p 3000:3000 my-node-app
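
To check that the container is actually running and to see the app's output, you can follow up with (the container ID will be whatever docker run printed):

docker ps
docker logs [container_id]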

Step-by-Step: Create Your Own Docker Container

Let’s build a simple Docker container step by step using a basic "Hello World" Node.js app.

  1. Create a project directory:
    mkdir my-node-app
    cd my-node-app
  2. Create a simple app file:
    echo "console.log('Hello from my container!');" > index.js
  3. Write a Dockerfile:
    FROM node:18
    WORKDIR /app
    COPY index.js .
    CMD ["node", "index.js"]
  4. Build your Docker image:
    docker build -t my-node-app .
  5. Run the container:
    docker run my-node-app

    You should see the message: Hello from my container!

This simple example shows how you can package a basic Node.js script inside a container and run it anywhere.
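
When you're done experimenting, you can tidy up with the commands from earlier (the container ID will differ on your machine):

docker ps -a
docker rm [container_id]
docker rmi my-node-app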

Why Use Docker Compose?

Most applications require multiple components, like a web server and a database. Managing these containers individually can be tedious and error-prone.

Docker Compose solves this by defining and running multi-container Docker applications via a simple YAML file (docker-compose.yml).

Compose allows you to start, stop, and manage all related containers as a single application with one command.

Key Docker Compose Commands

  • docker-compose up – Creates, builds, and starts containers.
  • docker-compose start – Starts existing containers.
  • docker-compose stop – Stops running containers.
  • docker-compose down – Stops and removes containers and networks.
  • docker-compose build – Builds or rebuilds service images.
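
A typical session with these commands, run from the directory containing docker-compose.yml, might look like this (-d starts everything in the background):

docker-compose up -d
docker-compose stop
docker-compose start
docker-compose down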

Example: E-Commerce Application Using Docker Compose

Suppose you have an e-commerce site running on an Apache web server and a MySQL database storing customer data.

You could manually run:

docker network create ecommerce
docker run -p 80:80 --name webserver --net ecommerce webserver-image
docker run --name database --net ecommerce -e MYSQL_ROOT_PASSWORD=secret mysql:latest

This approach is repetitive and error-prone if you want to scale or manage containers frequently.

Instead, with Docker Compose, you can define your services in a docker-compose.yml file:

version: '3.3'
services:
  web:
    build: ./webserver
    ports:
      - '80:80'
    networks:
      - ecommerce
  database:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=helloworld
      - MYSQL_DATABASE=ecommerce
    networks:
      - ecommerce

networks:
  ecommerce:

Running docker-compose up spins up both services connected via the ecommerce network automatically.
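
To confirm both services are up, you can inspect them through Compose (the service names here match the example file above):

docker-compose ps
docker-compose logs database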

Docker Socket Explained

The Docker daemon (which manages containers) listens on a Unix socket file, usually /var/run/docker.sock.

This socket acts like a communication channel between Docker clients (like the CLI) and the Docker service.
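
Because the daemon exposes an HTTP API over this socket, you can talk to it directly. For example, with curl (assuming curl is installed and your user can read the socket):

curl --unix-socket /var/run/docker.sock http://localhost/version
curl --unix-socket /var/run/docker.sock http://localhost/containers/json

The first call returns the daemon's version information as JSON; the second lists running containers, which is what docker ps does under the hood.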

Real-World Scenario

In CI/CD pipelines, tools like Jenkins often run inside Docker containers but need to control Docker on the host to build or run containers.

By mounting the Docker socket inside the Jenkins container:

docker run -v /var/run/docker.sock:/var/run/docker.sock jenkins

Jenkins can issue Docker commands to the host’s Docker daemon as if it were running directly on the host.

Security Considerations

  • Exposing the Docker socket to containers is a serious security risk.
  • If compromised, attackers can control all Docker containers and the host system.

Conclusion

Docker and Docker Compose are essential tools that simplify the development and deployment of applications by using containers.

Understanding Dockerfiles, basic commands, and how to manage multi-container applications with Compose helps you build consistent, portable environments.

Finally, understanding internal components like the Docker socket helps when automating container management, but always be cautious of the security implications.
