Docker is one of the core technologies of DevOps, used by numerous companies to speed up software development and deployment. A strong grasp of Docker is needed to deploy, scale, and maintain applications on popular cloud platforms like AWS, Azure, and Google Cloud. Learn the fundamentals of building and deploying applications in DevOps through this Docker tutorial for beginners. Explore our Docker course syllabus to get started.
Docker Introduction for DevOps Beginners
Docker is an open-source platform that automates the deployment of applications inside lightweight, portable, self-sufficient containers.
A container packages your application and all of its dependencies, including code, libraries, system tools, and runtime, into a single unit that runs reliably on any infrastructure.
- Think of your application and all of its dependencies (libraries, frameworks, system tools, and so on) as the cargo you want to ship.
- Docker packs everything cleanly into a self-contained container, so your application has everything it needs to run.
- These containers are portable and lightweight. You can move them from your development machine to a testing environment and then to a production server without worrying about incompatibilities.
- Docker containers behave uniformly across environments, just as shipping containers have uniform sizes and interfaces.
- Just as you can stack several shipping containers on a ship or in a yard, you can have multiple containers operating on the same computer.
- As each container is separate from the others, conflicts between applications are avoided.
Core Concepts of Docker
A container is isolated from the underlying operating system and from other containers. This isolation brings several important advantages:
- Consistency: Whether your program runs on a production cloud, a test server, or your laptop, it behaves the same way.
- Portability: These containers are simple to deploy and move across various computers and cloud service providers.
- Efficiency: Compared to typical virtual machines, containers are substantially faster to start and consume fewer resources because they are lightweight and share the host operating system’s kernel.
- Isolation: Dependency conflicts are avoided by isolating applications running in separate containers from one another.
- Reproducibility: You can reliably recreate the same environment each and every time due to Dockerfiles, which specify how to construct a container image.
Docker makes it simpler to develop, ship, and operate programs reliably anywhere by offering a means of encapsulating an application and its environment.
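As a minimal illustration of the reproducibility point, a Dockerfile sketch could be as short as three instructions (the copied file, app.py, is a hypothetical placeholder):

```dockerfile
# The same three instructions rebuild the same image on any machine.
FROM python:3.9-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```

Anyone with this file and the same app.py gets a byte-for-byte reproducible environment, which is what makes Dockerfiles the natural unit to keep under version control.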
Recommended: Docker online course program.
Difference Between Containers and Virtual Machines
Here are the key differences between containers and virtual machines:
| Feature | Containers | Virtual Machines (VMs) |
|---|---|---|
| Virtualization Level | Operating system level | Hardware level |
| Operating System | Shares the host OS kernel | Each VM has its own full OS |
| Kernel | Uses the host OS kernel | Has its own kernel |
| Resource Usage | Lightweight, uses fewer resources (CPU, RAM) | Resource-intensive, requires more CPU, RAM, and disk space |
| Boot Time | Very fast (seconds) | Slower (minutes) |
| Isolation | Process-level isolation, less complete than VMs | Strong, OS-level isolation |
| Portability | Highly portable across different OSs and clouds | Less portable, can have hypervisor dependencies |
| Size | Small (MBs) | Large (GBs) |
| Density | Higher density of applications per host | Lower density of applications per host |
| Hypervisor | Not required (uses a container engine) | Requires a hypervisor (Type 1 or Type 2) |
| Use Cases | Microservices, web applications, CI/CD, portability, scalability | Running different OSs, strong isolation, legacy apps |
Docker Architecture
Docker follows a client-server architecture.
Key Elements of Docker Architecture:
Let’s dissect the main elements and their interactions:
Docker Client
- As a user, you communicate with this command-line interface (CLI) program (docker).
- When you run Docker commands (such as docker run, docker build, or docker pull), the Docker client does not execute them itself. Instead, it packages them as API requests and sends them to the Docker daemon.
- The client can interact with either a local or a remote Docker daemon.
Docker Daemon (dockerd):
The Docker daemon is a persistent background process that runs on the host operating system.
Its primary responsibility is managing all Docker objects, including images, containers, networks, and volumes.
The daemon listens for requests made by the Docker client through the Docker API and executes them. Its work includes:
- Docker image creation and administration.
- Container creation, operation, termination, and deletion.
- Controlling the networks to which containers can connect.
- Controlling the amount of storage that containers consume.
- Communication with Docker registries.
Docker Host:
- The physical or virtual machine that the Docker daemon is operating on is known as the Docker host.
- For the Docker daemon and any containers that are currently running, it supplies the operating system and resources (CPU, RAM, and storage).
Docker Registry:
A Docker registry is a stateless, highly scalable way to store and distribute Docker images.
Consider it a Docker image repository.
The default public registry, Docker Hub, has a substantial library of official and community-contributed images.
For internal use, organizations can also run their own private registries. The Docker client uses registries to:
- Pull images: Download images from a registry to the Docker host.
- Push images: Upload images from the Docker host to a registry.
Suggested: DevOps course in Chennai.
Docker Interaction Flow
Here is the interaction flow in Docker:
User Issues Command: You use the Docker client to run a Docker command (e.g., docker run ubuntu).
Client Communicates with Daemon: The Docker client converts your command into a Docker API request and sends it to the Docker daemon over a local socket (often /var/run/docker.sock) or a network interface.
Daemon Processes Request: The Docker daemon receives and processes the request. For the docker run ubuntu example:
- First, the daemon determines whether the Ubuntu image is locally present.
- If the Ubuntu image is not locally available, the daemon retrieves it from the chosen Docker registry (by default, Docker Hub).
- Once the image is available locally, the daemon creates a new container from it.
- The daemon allocates the filesystem, network, and other resources to the container.
- The daemon starts the container by executing the command specified in the image’s configuration (or overridden on the docker run command line).
Daemon Manages Objects: The Docker daemon is in charge of managing images and containers at every stage of their lifecycle, including resource separation, networking, and storage.
Client Receives Response: The Docker client receives responses from the Docker daemon and displays the results to the user.
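The flow above can be exercised end to end with a single command. This sketch checks for a reachable daemon first, so it degrades gracefully on machines where Docker is not installed or not running:

```shell
# Client -> daemon -> (pull if missing) -> create -> allocate -> run -> response.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    flow_result="$(docker run --rm ubuntu echo 'hello from a container')"
else
    flow_result="docker daemon not available; skipping demo"
fi
echo "$flow_result"
```

On a machine with Docker, the first run also shows the pull step ("Unable to find image 'ubuntu:latest' locally") before the container's output appears.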
Recommended: Ansible Course in Chennai.
Working with Docker Images
An essential component of using Docker is working with Docker images. Below is a summary of typical Docker image actions and ideas:
Finding Images:
Docker Hub: Docker Hub is the main public registry for Docker images. You can browse and search a sizable library of official images (maintained by Docker or software vendors) and community-contributed images.
Docker search command: You can also use the docker search command to look for images straight from your terminal.
docker search nginx
This will display a list of nginx-related images along with details like description, official status, and stars (popularity).
Pulling Images:
The docker pull command downloads an image from a Docker registry (by default, Docker Hub) to your local Docker host.
Pulling the latest version: Just enter the image name to retrieve the most recent stable version of the image:
docker pull ubuntu
Pulling a specific tag: To indicate particular versions or variations, images frequently have tags. The colon (:) can be used to fetch a specific tag:
docker pull ubuntu:20.04
docker pull mysql:5.7
Specifying a registry: If the image is hosted on a private registry, you must include the registry hostname.
docker pull myregistry.example.com/myteam/myimage:latest
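For reference, a full image name has the shape [registry-host/]repository[:tag], with docker.io and latest filled in when omitted. The following illustrative shell helper (not part of Docker) splits a reference into those parts:

```shell
# Split an image reference of the form [registry/]repository[:tag].
parse_image_ref() {
    local ref="$1" registry="" rest tag
    rest="$ref"
    # Only the part before the first "/" can be a registry, and only when it
    # looks like a hostname (contains "." or ":", or is "localhost").
    case "$ref" in
        */*)
            case "${ref%%/*}" in
                *.*|*:*|localhost) registry="${ref%%/*}"; rest="${ref#*/}" ;;
            esac
            ;;
    esac
    tag="latest"
    case "$rest" in
        *:*) tag="${rest##*:}"; rest="${rest%:*}" ;;
    esac
    echo "registry=${registry:-docker.io} repository=$rest tag=$tag"
}

parse_image_ref "myregistry.example.com/myteam/myimage:latest"
# -> registry=myregistry.example.com repository=myteam/myimage tag=latest
parse_image_ref "ubuntu:20.04"
# -> registry=docker.io repository=ubuntu tag=20.04
parse_image_ref "nginx"
# -> registry=docker.io repository=nginx tag=latest
```

This is why `docker pull ubuntu` and `docker pull docker.io/library/ubuntu:latest` fetch the same image: the client fills in the omitted parts.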
Listing Local Images:
The docker images command lists the images currently stored on your local Docker host.
docker images
Usually, the output contains columns for:
- REPOSITORY: The image’s name.
- TAG: The image’s tag.
- IMAGE ID: The image’s unique identifier.
- CREATED: When the image was created.
- SIZE: The image’s size on disk.
Building Images:
You can build your own Docker images from a Dockerfile, a text file containing the set of instructions Docker uses to create an image.
- Creating a Dockerfile: The Dockerfile specifies the base image, adds your application code and dependencies, and provides the command to execute when a container starts.
- Building the image: The docker build command generates an image from a Dockerfile and a build context (here, the current directory, .). The -t parameter can be used to optionally tag the image:
docker build -t my-custom-app .
This command builds an image named my-custom-app with the default latest tag. A different tag can be specified as well:
docker build -t my-custom-app:v1.0 .
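Putting the pieces together, this sketch scaffolds a tiny build context and builds it. The alpine base image and the demo-app directory are assumptions for illustration, and the build step runs only when a Docker daemon is actually reachable:

```shell
# Scaffold a minimal build context.
mkdir -p demo-app

cat > demo-app/Dockerfile <<'EOF'
FROM alpine:3.19
CMD ["echo", "hello from my-custom-app"]
EOF

# Build only when the Docker daemon is available on this machine.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker build -t my-custom-app:v1.0 demo-app
fi
```

After a successful build, `docker images` lists my-custom-app with the v1.0 tag.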
Tagging Images:
The docker tag command creates a new tag or alias for an existing image. This is helpful when versioning images or pushing them to several repositories.
docker tag my-custom-app:v1.0 myregistry.example.com/myteam/my-custom-app:latest
This command adds a new name and tag to the my-custom-app:v1.0 image that is appropriate for pushing to a private registry.
Pushing Images:
A tagged image is uploaded to a Docker registry using the docker push command. If the registry is private, you must be logged in with docker login <registry_hostname>.
docker push myregistry.example.com/myteam/my-custom-app:latest
If you’re pushing to your own repository on Docker Hub, the repository name typically begins with your Docker Hub username.
Inspecting Images:
The docker inspect command returns detailed information about a Docker image in JSON format, including its layers, environment variables, and entry point.
docker inspect ubuntu:latest
To extract particular data, use the --format option.
Removing Images:
One or more images can be deleted from your local Docker host using the docker rmi (remove image) command. Images can be specified by name and tag or by their ID.
docker rmi <image_id_or_name:tag>
docker rmi my-custom-app:v1.0 ubuntu
Before you can remove an image that is in use by one or more containers, you must stop and remove those containers. You can force the removal with the -f or --force option, but this is usually not advised, as it may result in data loss if the containers have writable layers.
Image Layers:
- Layers make up Docker images. A new layer is created by each instruction in a Dockerfile.
- The image building process is greatly accelerated by the caching of these layers. If an instruction hasn’t changed since the last build, Docker uses the cached layer.
- As different images can share common layers, this layered architecture also aids in storage space savings.
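A sketch of how instruction order interacts with the layer cache (the file names are illustrative): dependencies are copied and installed before the frequently edited source code, so code changes reuse the cached dependency layers:

```dockerfile
FROM python:3.9-slim
WORKDIR /app

# These change rarely, so the layers below stay cached between builds...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...while frequent source edits only invalidate the layers from here down.
COPY . .
CMD ["python", "app.py"]
```

Reversing the order (copying all source before installing dependencies) would re-run the slow pip install on every code change.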
Base Images:
- When creating a Dockerfile, you begin with a base image using the FROM directive. The base image provides the operating system and essential tools.
- Official images for different Linux distributions (such as Ubuntu, Alpine, and CentOS) and runtime environments (such as Python, Node.js, and Java) are examples of common base images.
Effective use of Docker in your development and deployment workflows requires knowing how to locate, retrieve, build, tag, push, inspect, and remove Docker images.
Listing and Managing Containers:
- docker ps: Listing running containers.
- docker ps -a: Listing all containers (running and stopped).
- docker stop <container_id_or_name>: Stopping a running container.
- docker start <container_id_or_name>: Starting a stopped container.
- docker restart <container_id_or_name>: Restarting a container.
- docker rm <container_id_or_name>: Removing a stopped container.
- docker logs <container_id_or_name>: Viewing container logs.
- docker exec -it <container_id_or_name> <command>: Executing commands inside a running container.
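A sketch of a session that walks a container through that lifecycle (the name demo-nginx is arbitrary), guarded so it is skipped when no Docker daemon is reachable:

```shell
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run -d --name demo-nginx nginx   # start detached
    docker ps                               # demo-nginx shows as running
    docker logs demo-nginx                  # view its startup output
    docker exec demo-nginx nginx -v         # run a command inside it
    docker stop demo-nginx
    docker ps -a                            # now listed as exited
    docker rm demo-nginx                    # clean up
    lifecycle_status="ran"
else
    lifecycle_status="skipped (no docker daemon)"
fi
echo "$lifecycle_status"
```

Note that docker rm only works on stopped containers, which is why the stop comes first.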
Suggested: Git training course in Chennai.
Building Your Own Images: Dockerfiles
A Dockerfile can be thought of as a recipe or blueprint for creating a Docker image. It’s a straightforward text file with a set of instructions that Docker follows to put together an image. Your application’s code, runtime, system tools, libraries, and settings will all be included in this image.
What is a Dockerfile?
- Text File: A Dockerfile is fundamentally a simple text file.
- Instructions: It includes a set of commands that tell Docker how to construct the image. A new “layer” is added to the image with each instruction.
- Step-by-Step Process: Docker reads and runs the instructions in the Dockerfile sequentially to produce the final image.
- Reproducibility: Regardless of the environment you’re building in, Dockerfiles guarantee that your images are constructed consistently each time. By doing this, the “it works on my machine” issue is resolved.
- Version Control: To keep track of changes and quickly revert to earlier iterations, you can (and should!) maintain your Dockerfiles under version control, such as Git.
Structure of a Dockerfile
Each instruction in a standard Dockerfile is written in uppercase and appears on a separate line. The following are some of the most common and essential instructions:
FROM <base_image>[:<tag>]: This is normally the first instruction (only comments and ARG may precede it). It designates the base image to start from. This could be a language runtime image (python, node), a standard OS image (ubuntu, alpine), or another pre-built image.
FROM ubuntu:latest
FROM python:3.9-slim-buster
RUN <command>: This instruction executes a command in a new layer on top of the current image. It is typically used for installing software packages, configuring the environment, and running scripts.
RUN apt-get update && apt-get install -y --no-install-recommends curl wget
RUN pip install -r requirements.txt
COPY <src> <dest>: This instruction copies files and directories from the build context on the host machine to the designated location within the image.
COPY ./app /app
COPY requirements.txt /app/requirements.txt
ADD <src> <dest>: ADD is comparable to COPY, but it offers a few extra features:
- Compressed archives (tar, gzip, etc.) can be unpacked straight into the destination.
- Files can be retrieved from remote URLs. As a rule, use COPY for basic file copying and reserve ADD for when you need its extra features.
ADD https://example.com/myapp.tar.gz /app/
WORKDIR <path>: This sets the working directory for all subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.
WORKDIR /app
RUN python myapp.py
EXPOSE <port> [<port>/<protocol>…]: This documents the network ports the container’s application listens on. It is primarily documentation; it does not actually publish the port. To publish ports, use the -p flag with docker run.
EXPOSE 80/tcp
EXPOSE 443
ENV <key>=<value> …: This sets environment variables in the container.
ENV PYTHON_VERSION=3.9
ENV PATH=/opt/myapp/bin:$PATH
ARG <variable>[=<default_value>]: This defines build-time variables that can be passed to the docker build command with the --build-arg flag.
ARG USERNAME=guest
RUN echo "Running as: $USERNAME"
VOLUME ["<mount_point>"]: This creates a mount point with the given name and marks it as holding externally mounted volumes from the native host or other containers. It is used for data persistence.
VOLUME /data
USER <user>[:<group>]: This sets the user (and optionally the group) that subsequent instructions run as. For security, it is advisable to run your program as a non-root user.
USER nobody
CMD ["executable", "param1", "param2"] or CMD command param1 param2: This specifies the default command to execute when a container is launched from the image. A Dockerfile can include only one effective CMD instruction.
CMD ["python", "myapp.py"]
CMD ["node", "server.js"]
ENTRYPOINT ["executable", "param1", "param2"] or ENTRYPOINT command param1 param2: ENTRYPOINT works like CMD but specifies the container’s primary executable. Any arguments supplied to docker run are appended to the ENTRYPOINT command.
ENTRYPOINT ["/bin/my-entrypoint-script"]
Use CMD in combination with ENTRYPOINT to supply default arguments.
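A sketch of that pairing, where ENTRYPOINT fixes the executable and CMD supplies an overridable default argument:

```dockerfile
FROM alpine:3.19
# The executable is fixed; the final argument defaults to "localhost"
# but is replaced by anything passed after the image name on docker run.
ENTRYPOINT ["ping", "-c", "3"]
CMD ["localhost"]
```

Running the image with no arguments executes ping -c 3 localhost; running it with 8.8.8.8 as an argument executes ping -c 3 8.8.8.8.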
LABEL <key>=<value> <key>=<value> …: This adds key-value pairs of metadata to the image.
LABEL maintainer="Your Name <[email protected]>"
LABEL version="1.0"
LABEL description="My awesome application"
STOPSIGNAL <signal>: This sets the system call signal that will be sent to the container to make it exit.
STOPSIGNAL SIGTERM
HEALTHCHECK [--interval=<INTERVAL>] [--timeout=<TIMEOUT>] [--start-period=<DURATION>] CMD <command>: This tells Docker how to test a container to check that it is still working.
HEALTHCHECK --interval=5m --timeout=3s CMD curl -f http://localhost:8080 || exit 1
SHELL ["executable", "parameters"]: This lets you change the default shell used for shell-form instructions (such as RUN, CMD, and ENTRYPOINT).
Example Dockerfile:
FROM python:3.9-slim-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
ENV NAME=World
CMD [“python”, “app.py”]
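The example Dockerfile above expects an app.py in the build context. A minimal, standard-library-only stand-in might look like this (the APP_SERVE guard is an artifact of this sketch, so the module can be imported without blocking; a real app.py would simply call serve() under the usual __main__ check):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reads the NAME variable set by ENV NAME=World in the Dockerfile.
        body = f"Hello, {os.environ.get('NAME', 'World')}!".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port: int = 8000) -> None:
    # Port 8000 matches the EXPOSE 8000 line in the Dockerfile.
    HTTPServer(("", port), HelloHandler).serve_forever()

if __name__ == "__main__" and os.environ.get("APP_SERVE") == "1":
    serve()
```

With the container running via docker run -p 8000:8000 <image>, a request to http://localhost:8000 would return "Hello, World!", and overriding NAME with -e NAME=Docker changes the greeting.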
Explore all software training courses available at SLA.
Conclusion
This Docker tutorial for beginners covered the fundamental concepts with examples. We advise you to gain practical experience alongside this tutorial. Explore more with our Docker training in Chennai.