You use Docker to build your app image (e.g., myapp:1.0).
You push that image to a registry (e.g., Docker Hub, AWS ECR).
You use Kubernetes to deploy and manage that image across multiple servers (pods, deployments, services).
A lightweight containerization platform that shares the host OS kernel.
OS kernel: Shared with the host.
Size: Very small (tens or hundreds of MB).
Startup time: Seconds (lightweight).
Performance: Near-native (no guest OS overhead).
Isolation: Process-level (using namespaces & cgroups).
Portability: Very portable — “runs anywhere” that supports Docker.
Best for: Microservices, CI/CD, and cloud-native apps.
Examples: Docker, Podman, containerd.
Hardware
└── Host OS
└── Docker Engine
├── Container 1 (App A)
├── Container 2 (App B)
A virtualized system that emulates full hardware and runs its own OS.
OS kernel: Each VM runs its own.
Size: Large (many GB, because each includes a full OS).
Startup time: Minutes (boots a full OS).
Performance: Slower due to virtualization overhead.
Isolation: Hardware-level (each VM is completely separate).
Portability: Less portable — depends on hypervisor and OS compatibility.
Best for: Running multiple different OSes or workloads needing strong isolation.
Examples: VMware, VirtualBox, KVM, Hyper-V.
Hardware
└── Host OS
└── Hypervisor
├── Guest OS (Ubuntu)
│ └── App A
├── Guest OS (Windows)
│ └── App B
Sends commands to the Docker Daemon using a REST API (via UNIX socket or TCP).
docker build -t myapp .
docker run -p 8080:80 myapp
docker ps
It listens for requests from the client and manages Docker objects.
It can also communicate with other daemons (for remote builds or Swarm clusters).
Responsibilities:
Build and run containers.
Manage images, networks, and volumes.
Handle container lifecycle (create, start, stop, destroy).
Pull and push images from registries (e.g., Docker Hub).
Key Process:
docker build → dockerd → creates image layers
docker run → dockerd → starts container
These are the core building blocks Docker uses to run applications.
Images: Read-only templates used to create containers. Built from a Dockerfile (base OS + dependencies + app code).
Containers: Running instances of images (like a process sandbox). You can start, stop, move, or delete them.
Volumes: Persistent data storage that lives outside the container’s writable layer.
Networks: Allow communication between containers and between containers and the outside world.
A registry stores Docker images.
Public registry: Docker Hub
Automated Builds − Docker Hub can automatically build images from source code hosted on GitHub or Bitbucket.
Webhooks − Allow for the triggering of actions after a successful push to a repository.
Private registries: AWS ECR (Elastic Container Registry), GitHub Container Registry, etc.
When you run: docker pull nginx → Docker pulls that image from the registry to your local machine.
Push to registry → docker push myapp:latest
Let’s say you run this command: docker run nginx
Here’s what happens internally:
Client Command: The Docker Client sends the run command to the Docker Daemon.
Image Check: Daemon checks if the nginx image exists locally. If not, it pulls it from Docker Hub (registry).
Container Creation: Daemon creates a container layer on top of the read-only image layers.
Network Setup: It attaches the container to the default (bridge) network.
Process Start: Daemon starts the container (runs the nginx process inside it).
Container Running: You now have an isolated environment running nginx, using host resources via namespaces and cgroups.
Namespaces: Provide isolation (processes, network, mount points, etc.)
cgroups (Control Groups): Limit and allocate CPU, memory, and I/O for containers
UnionFS (OverlayFS): Layered file system used for images and containers
Container runtime: e.g., runc — actually launches containers based on OCI spec
Dockerfile is a text document with a set of instructions for creating a Docker image -- layer by layer. These instructions describe how to create the basic image, add files and directories, install dependencies, adjust settings, and define the container's entry point.
By specifying the build process in a Dockerfile, you can automate and replicate the image creation process, assuring consistency across environments.
In most cases, an instruction begins with a term such as "FROM" to identify the base image, which is usually a minimal Linux distribution. Commands such as "RUN" are then used to carry out particular operations within a layer.
FROM: Sets the base image on which the new image is going to be built upon.
FROM ubuntu:22.04
RUN: Executes commands in a new layer on top of the current image during the build. Typically used to install packages, update libraries, or perform general setup.
RUN apt-get update && apt-get install -y python3
COPY: Copies files and directories from the host machine into the container image.
COPY ./app /app
ADD: Like COPY, but with extra features: it auto-extracts local tar archives and can fetch files from URLs (URL-fetched files are not auto-extracted).
ADD https://example.com/file.tar.gz /app
WORKDIR: Sets the working directory where the subsequent commands in a Dockerfile will be executed.
WORKDIR /app
ENV: Defines environment variables within the container.
ENV FLASK_APP=main.py
EXPOSE: Informs Docker that the container listens on the declared network ports at runtime.
EXPOSE 8000
EXPOSE is not required, but it is good practice to document your container's network usage. It doesn't publish ports to the host — doing that still requires `-p` when running the container or `ports` in your `docker-compose.yml` file.
docker run -p 8080:80 <your_nginx_image>
CMD: Defines defaults for an executing container. There can only be one CMD instruction in a Dockerfile. If you list more than one CMD, then only the last CMD will take effect.
CMD ["python3", "main.py"]
ENTRYPOINT: Configures a container to run as an executable.
ENTRYPOINT ["python3", "main.py"]
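When ENTRYPOINT and CMD are combined (both in exec form), CMD supplies default arguments to the ENTRYPOINT; arguments passed on the `docker run` command line replace only the CMD part. A minimal sketch (the flag names are illustrative):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY main.py .
# Fixed executable; always runs on container start
ENTRYPOINT ["python3", "main.py"]
# Default arguments; overridden by args given after the image name
CMD ["--port", "8000"]
```

`docker run myimage` runs `python3 main.py --port 8000`, while `docker run myimage --port 9000` overrides only the CMD portion.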
LABEL: Provides meta-information for an image, like details of the maintainer, version, or description.
LABEL maintainer="johndoe@example.com"
ARG: Defines a variable that users can pass to the builder at build time using the "--build-arg" flag on the docker build command.
ARG version=1
VOLUME: Creates a mount point and assigns the given name to it, indicating that it will hold externally mounted volumes from the native host or other containers.
VOLUME /app/data
USER: Allows the setting of the username (or UID) and optionally the group (or GID) to be used when running that image and for any RUN, CMD, and ENTRYPOINT instructions that follow it in the Dockerfile.
USER johndoe
Use Non-Root User − Run containers as a non-root user to enhance security. Specifying a dedicated user and group in the Dockerfile adds another isolation layer.
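A minimal sketch of the non-root pattern (the user, group, and `start.sh` script names here are illustrative):

```dockerfile
FROM ubuntu:22.04
# Create a dedicated system group and user
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
WORKDIR /app
# Give the non-root user ownership of the app files
COPY --chown=appuser:appgroup . .
# All subsequent RUN, CMD, and ENTRYPOINT instructions run as this user
USER appuser
CMD ["./start.sh"]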
Images are immutable (cannot be changed once built).
1️⃣ Base Layer
It's the bare-minimum operating system and runtime environment needed to run your application.
Base images such as CentOS, Ubuntu, Debian, and Alpine Linux are frequently used.
2️⃣ Intermediate Layer
Each intermediate layer corresponds to a Dockerfile instruction such as RUN, COPY, or ADD.
These layers add application dependencies, configuration files, and other essential elements on top of the base layer.
3️⃣ Top Layer (aka Application Layer)
This layer contains the actual code for the application as well as any last-minute setups required for it to function.
Cache Layers: Docker reuses previously built layers whenever possible, reducing the time and computational power needed to rebuild images regularly.
.dockerignore File
To exclude unnecessary files and directories from the build context.
This will speed up builds, prevent sensitive information from being leaked into your image, and avoid the cache being invalidated when these files change.
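A typical .dockerignore might look like this (entries are illustrative; the syntax mirrors .gitignore-style patterns):

```
# Version control and local tooling
.git
# Dependencies reinstalled inside the image
node_modules
# Secrets that must never end up in an image layer
.env
*.key
# Logs and build artifacts
*.log
dist/
```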
Tagging
A repository name and a tag together uniquely identify a Docker image.
Tags are used to distinguish between various image versions. When no tag is given, Docker uses the "latest" tag by default.
A container is a running copy/instance of an image.
It’s like a lightweight virtual machine that starts fast and shares the host OS.
The image includes:
The base OS layer (e.g., Debian)
Dependencies
Your application code
Metadata (like what command to run)
Containerization (packaging): Includes all of the necessary runtime environments, libraries, and other components needed to run the application.
Isolation: With its file system, network interface, and process space, each container operates independently of the host system as a separate process.
Portability: "Build once, run anywhere"
Docker Volumes: Live in a Docker-managed part of the host filesystem, usually at /var/lib/docker/volumes/ on Linux.
Volumes are the preferred way to persist data generated by and used in Docker containers, as they provide portability and ease of management.
Docker manages them, independent of the host machine's directory structure.
Data stored in volumes will outlive the lifecycle of a stopped, removed, or replaced container.
Examples:
Databases − Store database files in a volume so the data persists across container restarts.
Web Server Content − Storing website files or user uploads within a volume, so even when the web server container is replaced, they remain accessible.
Application Logs − Store logs in a volume for easy analysis and persistence.
Bind mounts: Can be located anywhere in a host system.
Mount a directory or a file.
tmpfs mounts (Linux) & Named Pipes (Windows): Exists only in the host system's memory.
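The three mount types above can be sketched in a Compose file (the service name and paths are illustrative):

```yaml
services:
  app:
    image: nginx:latest
    volumes:
      - app-data:/var/lib/app   # named volume, Docker-managed
      - ./config:/etc/app:ro    # bind mount, any host path (read-only here)
    tmpfs:
      - /tmp                    # tmpfs mount, lives only in host memory (Linux)

volumes:
  app-data:
```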
Mechanisms and configurations that enable Docker containers to communicate with each other and with the outside world. (Note: Container Linking is deprecated; networking is the preferred way.)
Docker provides several network drivers:
bridge: The default network driver.
Each container on a bridge network gets its own IP address and can communicate with other containers on the same network by IP address or container name.
It is also possible for containers to use the host's network connection to obtain access to external networks.
host: Remove network isolation between the container and the Docker host.
The container uses the host's network namespace, IP address, and network ports.
This can be useful if a container needs to access host network interfaces or very low latency is required for network access.
none: Completely isolate a container from the host and other containers.
overlay: Overlay networks connect multiple Docker daemons together.
It is designed for multi-host networking in a Docker swarm cluster.
Allows containers running on different Docker hosts to communicate with each other.
It creates a virtual overlay network across several hosts.
ipvlan: IPvlan networks provide full control over both IPv4 and IPv6 addressing.
macvlan: Assign a MAC address to a container.
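As a sketch of the bridge behavior described above, a user-defined bridge network lets services reach each other by name via Docker's built-in DNS (the service and image names here are hypothetical):

```yaml
services:
  api:
    image: myapp:latest       # hypothetical application image
    networks:
      - backend
  db:
    image: postgres:16
    networks:
      - backend

networks:
  backend:
    driver: bridge            # containers resolve each other as "api" and "db"
```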
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
This says: "Start from a Python base image, copy my app files, install dependencies, and when started, run python app.py."
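For completeness, a minimal app.py that this Dockerfile could run (purely illustrative; a real app would typically start a web server, and the APP_NAME variable is an assumption):

```python
import os

def main():
    # Environment variables (e.g., set with ENV in the Dockerfile or
    # passed via docker run -e) are visible to the process in the container.
    name = os.environ.get("APP_NAME", "myapp")
    print(f"{name} is running inside the container")

if __name__ == "__main__":
    main()
```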
docker build -t myapp .
Note: The . (dot) at the end is the build context; Docker looks for ./Dockerfile (case-sensitive, no extension) in that directory.
Otherwise: docker build -t myapp -f path/to/CustomDockerfile .
Docker reads the Dockerfile → builds an image → saves it locally.
To see all your images with: docker images
docker run -d -p 8080:80 myapp
-d → run in background (freeing your terminal immediately)
-p 8080:80 → map container’s port 80 to host port 8080
Docker runs that image in a container → your app is live.
Docker takes the image myapp
Creates a writable layer on top of it
Starts your app as a running process
To see all running containers with: docker ps
Note: ps=process status. In Linux/Unix, the command: ps → shows the processes currently running on the system.
If the container runs in the background, you won’t see output directly.
To check its logs later: docker logs <container_id>
For live logs: docker logs -f <container_id>
Created: Docker reserves the storage volumes and network interfaces that the container needs, but the processes inside the container have not yet begun.
Started (Running): Containers in this state actively use CPU, memory, and other system resources.
Paused (Suspended): Keeps its resource allotments and configuration settings but is not in use. This state helps with resource conservation and debugging by momentarily stopping container execution without completely stopping it.
Exited: Containers enter this state when they finish the tasks they are intended to complete, or when they run into errors that force them to terminate. The container keeps its resources and configuration settings but no longer runs any processes. In this state, containers can be deleted with the docker rm command or restarted with the docker start command.
Dead: Either experienced an irreversible error or been abruptly terminated. When a container is in the "dead" state, it is not in use and the Docker daemon usually releases or reclaims its resources.
Management of multi-container Docker applications.
docker-compose.yml file: Definition of services, networks, and volumes; build context, environment variables, ports to be exposed, and the relationship between services.
Version − Defines the format of the Docker Compose file so that it ensures compatibility with different Docker Compose features.
Services − Lists all services (containers) composing the application. Each service is described with various configuration options.
Configuration Options:
Image − This field specifies the Docker image that should be used for the service.
Build − Specifies the build context directory, so Compose builds the image locally instead of pulling it from a registry.
Ports − Maps host ports to container ports.
Volumes − Attach volumes to your service for persistent storage.
Environment − Sets environment variables for the service.
Depends_on − Defines service dependencies so they are started in the appropriate order.
Networks − Specifies custom networks for inter-container communication and may specify the configuration options and network drivers.
Configuration Options
driver − This specifies the driver to be used in the network (e.g., bridge, overlay).
driver_opts − Options for the network driver.
ipam − Specifies the IP address management configurations like subnets and IP ranges.
Volumes − Declares shared volumes that are used to allow persistent storage. Volumes can be shared between services or used to store data outside the container's life-cycle.
Configuration Options
External − Indicates whether the volume is created outside Docker Compose.
Driver − Specifies the volume driver to use.
Driver_opts − Options to configure the volume driver.
Ways to Set Environment Variables:
Inline − Register environment variables within your service definition.
env_file − This command allows environment variables to be loaded from an external file.
version: '3.8'

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - web-data:/var/www/html
    networks:
      - webnet
    depends_on:
      - mysqldb

  mysqldb:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql
    networks:
      - webnet

  postgresdb:
    image: postgres:latest
    env_file:
      - .env

networks:
  webnet:
    driver: bridge

volumes:
  web-data:
  db-data:
In .env file −
POSTGRES_USER=myuser
POSTGRES_PASSWORD=mypassword
POSTGRES_DB=mydatabase
docker-compose up − brings up and runs the entire application, as defined in the docker-compose.yml file while creating and starting all the services, networks, and volumes. In addition, if images of this service have never been built, it builds the necessary Docker images.
docker-compose down − stops and removes all the containers, networks, and volumes defined in the `docker-compose.yml` file. It cleans up the resources that your app has taken so far, in the sense that you're sure no residual container or network continues running somewhere.
docker-compose build − builds or rebuilds Docker images for services defined in the docker-compose.yml file. It runs when changes are made in a Dockerfile or source code; new images need to be created.
docker-compose start − will start the already created containers without recreating them, bringing up previously stopped services.
docker-compose stop − stops the currently running containers, without discarding them; thus, it is possible to restart the services later.
docker-compose restart − restarts services; useful if you've made changes to the environment or configuration and want them picked up.
docker-compose ps − shows the status of all services defined in the docker-compose.yml file, pointing out containers' statuses, their names, states, and ports.
docker-compose logs − displays the combined logs of all services defined in docker-compose.yml.
docker-compose exec <service_name> <command> − runs arbitrary commands in a running service container. This can be handy for running system commands inside your application or executing scripts directly within the container.
Example: $ docker-compose exec web bash
Compose makes it simple to deploy and scale applications, but it lacks capabilities needed for very large-scale production environments, such as load balancing and rolling updates.
Consider using tools like Docker Swarm or Kubernetes for orchestrating and managing large-scale, highly available containerized applications in a production situation.