What is Docker?

Docker is an open source platform for building, deploying, and managing containerized applications. Learn what containers are, how they compare to virtual machines (VMs), and why Docker is so widely adopted.

Docker is an open source containerization platform. It enables developers to package applications into containers—standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment. Containers simplify delivery of distributed applications, and have become increasingly popular as organizations shift to cloud-native development and hybrid multicloud environments.

Developers can create containers without Docker, but the platform makes it simpler and safer to build, deploy, and manage them. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.
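
For illustration, a typical container lifecycle driven by the docker CLI might look like the following sketch; the image name my-app and the port mapping are hypothetical:

    # Build an image from the Dockerfile in the current directory
    docker build -t my-app .
    # Start a container from that image in the background, publishing a port
    docker run -d -p 8080:8080 --name my-app-instance my-app
    # List running containers, then stop and remove this one
    docker ps
    docker stop my-app-instance
    docker rm my-app-instance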

Docker also refers to Docker, Inc., the company that sells the commercial version of Docker, and to the Docker open source project, to which Docker, Inc. and many other organizations and individuals contribute.

How containers work, and why they’re so popular

Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel. These capabilities – such as control groups (cgroups) for allocating resources among processes, and namespaces for restricting a process's access to, or visibility into, other resources and areas of the system – enable multiple application components to share the resources of a single instance of the host operating system, in much the same way that a hypervisor enables multiple virtual machines (VMs) to share the CPU, memory, and other resources of a single hardware server.

As a result, container technology offers all the functionality and benefits of VMs – including application isolation, cost-effective scalability, and disposability – plus important additional advantages:

  • Lighter weight: Unlike VMs, containers don't carry the payload of an entire OS instance and hypervisor; they include only the OS processes and dependencies necessary to execute the code. Containers are measured in megabytes (vs. gigabytes for some VMs), make better use of hardware capacity, and have faster startup times.
  • Greater resource efficiency: With containers, you can run several times as many copies of an application on the same hardware as you can using VMs. This can reduce your cloud spending.
  • Improved developer productivity: Compared to VMs, containers are faster and easier to deploy, provision and restart. This makes them ideal for use in continuous integration and continuous delivery (CI/CD) pipelines and a better fit for development teams adopting Agile and DevOps practices.

Why use Docker?

Docker is so popular today that "Docker" and "containers" are often used interchangeably. But the first container-related technologies were available for years – even decades – before Docker was released to the public in 2013.

Most notably, in 2008, Linux Containers (LXC) was released, building on the cgroup and namespace capabilities of the Linux kernel to enable full OS-level virtualization for a single instance of Linux. While LXC is still used today, newer technologies built on the Linux kernel are available, and most modern Linux distributions, including Ubuntu, ship with these capabilities.

Docker enhanced the native Linux containerization capabilities with technologies that enable:

  • Improved—and seamless—portability: While LXC containers often reference machine-specific configurations, Docker containers run without modification across desktop, data center, and cloud environments.
  • Even lighter weight and more granular updates: With LXC, multiple processes can be combined within a single container. Docker containers are designed to run a single process each, which makes it possible to build an application that can keep running while one of its parts is taken down for an update or repair.
  • Automated container creation: Docker can automatically build a container based on application source code.
  • Container versioning: Docker can track versions of a container image, roll back to previous versions, and trace who built a version and how. It can even upload only the deltas between an existing version and a new one.
  • Container reuse: Existing containers can be used as base images—essentially like templates for building new containers.
  • Shared container libraries: Developers can access an open-source registry containing thousands of user-contributed containers.

Today, Docker containerization also works with Microsoft Windows Server. And most cloud providers offer specific services to help developers build, ship, and run applications containerized with Docker.

Who uses Docker?

Docker is an open application development framework that's designed to benefit DevOps teams and developers. Using Docker, developers can easily build, package, ship, and run applications as lightweight, portable, self-sufficient containers, which can run virtually anywhere. Containers allow developers to package an application with all of its dependencies and deploy it as a single unit. With prebuilt, self-sufficient application containers, developers can focus on the application code without worrying about the underlying operating system or deployment infrastructure.

Additionally, developers can leverage thousands of open source container applications that are already designed to run within a Docker container. For DevOps teams, Docker lends itself to continuous integration and continuous delivery (CI/CD) toolchains and reduces the complexity of the system architecture needed to deploy and manage applications. With the introduction of container orchestration cloud services, any developer can develop containerized applications locally in their development environment, then move and run them in production on cloud services, such as managed Kubernetes services.

What you need to know about Docker Community Edition

The open source components of Docker are gathered in a product called Docker Community Edition, or docker-ce. These include the Docker engine and a set of command-line tools to help administrators manage all the Docker containers they are using. You can install this toolchain by searching for docker in your distribution's package manager.
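
As a sketch, on a Debian- or Ubuntu-family system the installation might look like this; package names vary by distribution, so treat docker.io as an assumption specific to that family:

    # Find and install the community Docker engine (Debian/Ubuntu package name)
    apt search docker
    sudo apt-get install docker.io
    # Confirm the client is installed and the daemon is reachable
    docker --version
    sudo docker info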

Docker tools and terms

Some of the tools and terminology you’ll encounter when using Docker include:

Dockerfile

Every Docker container starts with a simple text file containing instructions for how to build the Docker container image. The Dockerfile automates the process of Docker image creation. It's essentially a list of command-line interface (CLI) instructions that Docker Engine runs in order to assemble the image.
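
A minimal Dockerfile sketch for a hypothetical Python application (the file names app.py and requirements.txt are assumptions) might look like this:

    # Reuse an existing image as the base (see "Container reuse" above)
    FROM python:3.12-slim
    # Set the working directory and copy in the application source
    WORKDIR /app
    COPY . /app
    # Install the dependencies the code needs to run
    RUN pip install --no-cache-dir -r requirements.txt
    # The command the container runs when it starts
    CMD ["python", "app.py"]

Running docker build -t my-app . tells Docker Engine to execute these instructions in order and assemble the resulting image.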

Docker images

Docker images contain executable application source code as well as all the tools, libraries, and dependencies that the application code needs to run as a container. When you run the Docker image, it becomes one instance (or multiple instances) of the container.

It’s possible to build a Docker image from scratch, but most developers pull them down from common repositories. Multiple Docker images can be created from a single base image, and they’ll share the commonalities of their stack.

Docker images are made up of layers, and each layer corresponds to a version of the image. Whenever a developer makes changes to the image, a new top layer is created, and this top layer replaces the previous top layer as the current version of the image. Previous layers are saved for rollbacks or to be re-used in other projects.

Each time a container is created from a Docker image, yet another new layer, called the container layer, is created. Changes made to the container – such as adding or deleting files – are saved to the container layer only and exist only for the lifetime of the container. This layered, iterative image-creation process enables increased overall efficiency, since multiple live container instances can run from a single base image and, when they do, share a common stack.
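
You can observe this layering directly. For example, using the public nginx image as an arbitrary example:

    # Download an image and inspect the stack of layers it is built from
    docker pull nginx
    docker history nginx
    # List local images and their sizes
    docker image ls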

Docker containers

Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content. Users can interact with them, and administrators can adjust their settings and conditions using docker commands.
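
A few representative commands, using an arbitrary container name (web) and the public nginx image:

    # Create and start a container in the background
    docker run -d --name web nginx
    # Open an interactive shell inside the running container
    docker exec -it web /bin/sh
    # Inspect the container's configuration and state
    docker inspect web
    # Stop and remove the container; the read-only image is unaffected
    docker stop web
    docker rm web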

Docker Hub

Docker Hub is the public repository of Docker images that calls itself the "world's largest library and community for container images." It holds over 100,000 container images sourced from commercial software vendors, open source projects, and individual developers. It includes images that have been produced by Docker, Inc., certified images belonging to the Docker Trusted Registry, and many thousands of other images.

All Docker Hub users can share their images at will. They can also download predefined base images from Docker Hub to use as a starting point for any containerization project.
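
For example, searching, pulling, and sharing images might look like this; the repository name yourname/my-app is hypothetical, and pushing requires a Docker Hub account:

    # Search Docker Hub and pull an image to use locally or as a base
    docker search nginx
    docker pull ubuntu:22.04
    # Log in and push your own image to your Docker Hub repository
    docker login
    docker push yourname/my-app:1.0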

Docker daemon

The Docker daemon is a service that runs on your host operating system, such as Linux, Microsoft Windows, or Apple macOS. This service creates and manages your Docker images for you using commands from the client, acting as the control center of your Docker implementation.
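
As a quick check on a Linux host with systemd (on Windows and macOS the daemon typically runs inside a lightweight VM managed by Docker Desktop):

    # Verify the daemon is running, then query it through the client
    sudo systemctl status docker
    docker info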

Docker registry

A Docker registry is a scalable, open source storage and distribution system for Docker images. The registry enables you to track image versions in repositories, using tags for identification.
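
For a sketch of a private registry workflow, Docker distributes its open source registry as the registry:2 image; the image name my-app below is hypothetical:

    # Run a local registry on port 5000
    docker run -d -p 5000:5000 --name registry registry:2
    # Tag an existing image for the local registry, then push and pull it
    docker tag my-app localhost:5000/my-app:1.0
    docker push localhost:5000/my-app:1.0
    docker pull localhost:5000/my-app:1.0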

Docker deployment and orchestration

If you're running only a few containers, it's fairly simple to manage your application within Docker Engine, the de facto industry-standard runtime. But if your deployment comprises thousands of containers and hundreds of services, it's nearly impossible to manage that workflow without the purpose-built tools described below.

Docker Compose

If you're building an application out of processes in multiple containers that all reside on the same host, you can use Docker Compose to manage the application's architecture. With Docker Compose, you write a YAML file that specifies which services are included in the application, and you can then deploy and run the containers with a single command. Using Docker Compose, you can also define persistent volumes for storage, specify base images, and document and configure service dependencies.
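
A minimal sketch of such a YAML file (the service names, images, and ports are hypothetical):

    # docker-compose.yml
    services:
      web:
        build: .
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:16
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

With this file in place, docker compose up builds (if necessary) and starts both services with a single command, and docker compose down stops and removes them.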

Understanding containers

Container technology can be thought of in terms of three different categories:

  • Builder: a tool or series of tools used to build a container, such as distrobuilder for LXC, or a Dockerfile for Docker.
  • Engine: an application used to run a container. For Docker, this refers to the docker command and the dockerd daemon. For others, this can refer to the containerd daemon and relevant commands (such as podman).
  • Orchestration: technology used to manage many containers, including Kubernetes and OKD.

Containers often deliver both an application and its configuration, meaning that a sysadmin doesn't have to spend as much time getting an application in a container to run as they would when an application is installed from a traditional source. Docker Hub and Quay.io are repositories offering images for use by container engines.

The greatest appeal of containers, though, is their ability to "die" gracefully and respawn when load balancing demands it. Whether a container's demise is caused by a crash or simply by low server traffic that makes it unnecessary, containers are "cheap" to start, and they're designed to appear and disappear seamlessly. Because containers are meant to be ephemeral and to spawn new instances as often as required, monitoring and managing them is expected to be automated rather than done by a human in real time.
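
At the level of a single engine, Docker's restart policies offer a small taste of this automation; orchestrators go much further. For example:

    # Ask the engine to respawn this container automatically if it exits,
    # unless it was explicitly stopped
    docker run -d --restart unless-stopped --name web nginx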

Alternatives to Docker

Linux containers have facilitated a massive shift in high-availability computing. There are many toolsets out there to help you run services, or even your entire operating system, in containers. The Open Container Initiative (OCI) is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, you have a choice of container toolchains, including Docker, CRI-O, Podman, LXC, and others.

Container utilities

By design, containers can multiply quickly, whether you’re running lots of different services or you’re running many instances of a few services. Should you decide to run services in containers, you probably need software designed to host and manage those containers. This is broadly known as container orchestration. While Docker and other container engines like Podman and CRI-O are good utilities for container definitions and images, it’s their job to create and run containers, not help you organize and manage them. Projects like Kubernetes and OKD provide container orchestration for Docker, Podman, CRI-O, and more.

When running any of these in production, you may want to invest in support through a downstream project like OpenShift (which is based on OKD).

Why use Docker

One of the great things about open source is that you have a choice in what technology you use to accomplish a task. The Docker engine can be useful for lone developers who need a lightweight, clean environment for testing, without a need for complex orchestration. If Docker is available on your system and everyone around you is familiar with the Docker toolchain, then Docker Community Edition (docker-ce) is a great way to get started with containers.

Docker Hub and Quay.io are repositories offering images for your container engine of choice. If Docker Community Edition is unavailable or is unsupported, then Podman is a wise option. The effort to ensure open standards prevail is ongoing, so the important long-term strategy for your container solution should be to stick with projects that respect and foster open source and open standards. Proprietary extras may seem appealing at first, but as is usually the case, you lose the flexibility of choice once you commit your tools to a product that fails to allow for migration. Containers can be liberating, as long as they're liberated.

Kubernetes

To monitor and manage container lifecycles in more complex environments, you’ll need to turn to a container orchestration tool. While Docker includes its own orchestration tool (called Docker Swarm), most developers choose Kubernetes instead.

Kubernetes is an open-source container orchestration platform descended from a project developed for internal use at Google. Kubernetes schedules and automates tasks integral to the management of container-based architectures, including container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring, and more. In addition, the open source ecosystem of tools for Kubernetes—including Istio and Knative—enables organizations to deploy a high-productivity Platform-as-a-Service (PaaS) for containerized applications and a faster on-ramp to serverless computing.
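
As a minimal sketch, a Kubernetes Deployment asks the cluster to keep a set of identical containers running; the names and image below are hypothetical:

    # deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                  # keep three instances running at all times
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: yourname/my-app:1.0
            ports:
            - containerPort: 8080

Applying this with kubectl apply -f deployment.yaml hands the desired state to Kubernetes, which then schedules the containers, monitors them, and replaces any instance that fails.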
