Introduction
Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential, each component of your application should run in its own container. For complex applications with a lot of components, orchestrating all the containers to start up, shut down, and talk to each other can quickly become unwieldy.
The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all your Docker containers and configurations. This became so popular that the Docker team built Docker Compose on the Fig source; Fig itself is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.
In this tutorial, you will install the latest version of Docker Compose to help you manage multi-container applications, and will explore the basic commands of the software.
Docker and Docker Compose Concepts
Using Docker Compose requires combining a number of different Docker concepts, so before we get started let’s take a minute to review the various concepts involved. If you’re already familiar with Docker concepts like volumes, links, and port forwarding, you might want to skip ahead to the next section.
Docker Images
Each Docker container is a local instance of a Docker image. You can think of a Docker image as a self-contained Linux installation, usually a minimal one that includes only the packages needed to run the image. These images use the kernel of the host system, but since they run inside a Docker container and only see their own file system, it’s perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice versa).
Most Docker images are distributed via Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded there, which you can use to deploy the software. When possible, it’s best to grab the “official” images, since they are curated by the Docker team and follow Docker best practices.
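For example, pulling an official image from Docker Hub takes a single command (nginx here is just an illustrative choice):
docker pull nginx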
Communicating with Docker Containers
Docker containers are isolated from the host machine, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.
Docker has three primary ways to work around this. The first and most common is to have Docker set environment variables inside the Docker container. The code running inside the container then checks the values of these environment variables on startup and uses them to configure itself properly.
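For example, the official MariaDB image reads its initial root password from an environment variable. A minimal docker-compose.yml sketch (the password value is just a placeholder):
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example-password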
Another commonly used method is a Docker data volume. Docker volumes come in two flavors — internal and shared.
Specifying an internal volume just means that for a folder you specify for a particular Docker container, the data will be persisted when the container is removed. For example, if you wanted to make sure your log files persisted you might specify an internal /var/log volume.
A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.
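In docker-compose.yml terms, the difference is whether you give the volume a host-side path. A minimal sketch (the paths here are only illustrative):
web:
  image: nginx
  volumes:
    - /var/log                      # internal volume: persists the container's logs
    - ./html:/usr/share/nginx/html  # shared volume: maps a host folder into the container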
The third way to communicate with a Docker container is via the network. Docker allows communication between different Docker containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to talk to each other and use port forwarding to expose WordPress to the outside world so that users can connect to it.
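Sticking with the WordPress and MariaDB example, a docker-compose.yml sketch might look like this (the service names, published port, and password are illustrative, and a working setup would also pass database credentials to WordPress through environment variables):
wordpress:
  image: wordpress
  links:
    - db
  ports:
    - "80:80"   # forward host port 80 to port 80 inside the container
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example-password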
How To Install and Use Docker Compose on CentOS 7
Prerequisites
To follow this article, you will need the following:
- CentOS 7 server, set up with a non-root user with sudo privileges
- Docker installed with the instructions from Step 1 and Step 2 of How To Install and Use Docker on CentOS 7
Once these are in place, you will be ready to follow along.
Step 1 — Installing Docker Compose
To get the latest release, follow the Docker documentation and install Docker Compose from the binary in Docker’s GitHub repository.
Check the current release and if necessary, update it in the command below:
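Assuming the conventional install location of /usr/local/bin and the release-asset naming used by the Compose project, the download command looks something like this:
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose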
Next, set the permissions to make the binary executable:
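Assuming the same install location:
sudo chmod +x /usr/local/bin/docker-compose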
Then, verify that the installation was successful by checking the version:
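docker-compose --version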
This will print out the version you installed:
docker-compose version 1.23.2, build 1110ad01
Now that you have Docker Compose installed, you’re ready to run a “Hello World” example.
Step 2 — Running a Container with Docker Compose
The public Docker registry, Docker Hub, includes a simple “Hello World” image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image.
First, create a directory for our YAML file:
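mkdir hello-world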
Then change into the directory:
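cd hello-world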
Now create the YAML file using your favorite text editor. This tutorial will use Vi:
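vi docker-compose.yml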
Enter insert mode by pressing i, then put the following contents into the file:
my-test:
image: hello-world
The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the command docker-compose up, it will look for a local image by the name specified, hello-world.
With this in place, hit ESC to leave insert mode. Enter :x then ENTER to save and exit the file.
To look manually at images on your system, use the docker images command:
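docker images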
When there are no local images at all, only the column headings display:
REPOSITORY TAG IMAGE ID CREATED SIZE
Now, while still in the ~/hello-world directory, execute the following command to create the container:
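docker-compose up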
The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:
Pulling my-test (hello-world:)...
latest: Pulling from library/hello-world
1b930d010525: Pull complete
. . .
After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:
. . .
Creating helloworld_my-test_1...
Attaching to helloworld_my-test_1
my-test_1 |
my-test_1 | Hello from Docker.
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |
. . .
It will then print an explanation of what it did:
. . .
my-test_1 | To generate this message, Docker took the following steps:
my-test_1 | 1. The Docker client contacted the Docker daemon.
my-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1 | (amd64)
my-test_1 | 3. The Docker daemon created a new container from that image which runs the
my-test_1 | executable that produces the output you are currently reading.
my-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1 | to your terminal.
. . .
Docker containers only run as long as the command is active, so once hello finishes running, the container stops. Consequently, when you look at the active processes, the column headers will appear, but the hello-world container won’t be listed because it isn’t running.
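You can check this with Docker’s ps command:
docker ps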
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Use the -a flag to show all containers, not just the active ones:
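docker ps -a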
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50a99a0beebd hello-world "/hello" 3 minutes ago Exited (0) 3 minutes ago hello-world_my-test_1
Now that you have tested out running a container, you can move on to exploring some of the basic Docker Compose commands.
Step 3 — Learning Docker Compose Commands
To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.
The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine: just make one directory for each container group and one docker-compose.yml file per directory.
So far you’ve been running docker-compose up on your own, and using CTRL-C to shut the container down. This allows debug messages to be displayed in the terminal window. This isn’t ideal though; when running in production it is more robust to have docker-compose act more like a service. One simple way to do this is to add the -d option when you up your session:
docker-compose up -d
docker-compose will now fork to the background.
To show your group of Docker containers (both stopped and currently running), use the following command:
docker-compose ps -a
If a container is stopped, the State will be listed as Exited, as shown in the following example:
Name Command State Ports
------------------------------------------------
hello-world_my-test_1 /hello Exit 0
A running container will show Up:
Name Command State Ports
---------------------------------------------------------------
nginx_nginx_1 nginx -g daemon off; Up 443/tcp, 80/tcp
To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file that you used to start the Docker group:
docker-compose stop
Note: docker-compose kill is also available if you need to shut things down more forcefully.
In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch, you can use the rm command to fully delete all the containers that make up your container group:
docker-compose rm
If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will return an error:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
This section has covered the basics of how to manipulate containers with Docker Compose. If you need greater control over your containers, you can access the filesystem of the Docker container and work from a command prompt inside your container, a process described in the next section.
Step 4 — Accessing the Docker Container Filesystem
In order to work on the command prompt inside a container and access its filesystem, you can use the docker exec command.
The “Hello World” example exits after it runs, so to test out docker exec, start a container that will keep running. For the purposes of this tutorial, use the Nginx image from Docker Hub.
Create a new directory named nginx and move into it:
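mkdir ~/nginx
cd ~/nginx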
Next, make a docker-compose.yml file in your new directory and open it in a text editor:
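vi docker-compose.yml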
Next, add the following lines to the file:
nginx:
image: nginx
Save the file and exit. Start the Nginx container as a background process with the following command:
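docker-compose up -d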
Docker Compose will download the Nginx image and the container will start in the background.
Now you will need the CONTAINER ID for the container. List all of the containers that are running with the following command:
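docker ps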
You will see something similar to the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b86b6699714c nginx "nginx -g 'daemon of…" 20 seconds ago Up 19 seconds 80/tcp nginx_nginx_1
If you wanted to make a change to the filesystem inside this container, you’d take its ID (in this example b86b6699714c) and use docker exec to start a shell inside the container:
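docker exec -t -i b86b6699714c /bin/bash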
The -t option opens up a terminal, and the -i option makes it interactive. /bin/bash opens a bash shell to the running container.
You will then see a bash prompt for the container similar to:
root@b86b6699714c:/#
From here, you can work from the command prompt inside your container. Keep in mind, however, that unless you are in a directory that is saved as part of a data volume, your changes will disappear as soon as the container is restarted. Also, remember that most Docker images are created with very minimal Linux installs, so some of the command line utilities and tools you are used to may not be present.
How To Install and Use Docker Compose on Ubuntu 20.04
Prerequisites
To follow this article, you will need:
- Access to an Ubuntu 20.04 local machine or development server as a non-root user with sudo privileges. If you’re using a remote server, it’s advisable to have an active firewall installed.
- Docker installed on your server or local machine, following Steps 1 and 2 of How To Install and Use Docker on Linux
Step 1 — Installing Docker Compose
To make sure you obtain the latest stable version of Docker Compose, you’ll download this software from its official GitHub repository.
First, confirm the latest version available on the project’s releases page. At the time of this writing, the most current stable version is 2.2.3.
Use the following command to download:
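Assuming a 64-bit (x86_64) machine and the Docker CLI plugins directory at ~/.docker/cli-plugins/, the download looks something like this:
mkdir -p ~/.docker/cli-plugins/
curl -SL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose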
Next, set the correct permissions so that the docker compose command is executable:
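Assuming the plugin location used above:
chmod +x ~/.docker/cli-plugins/docker-compose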
To verify that the installation was successful, you can run:
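docker compose version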
You’ll see output similar to this:
Docker Compose version v2.2.3
Docker Compose is now successfully installed on your system. In the next section, you’ll see how to set up a docker-compose.yml file and get a containerized environment up and running with this tool.
Step 2 — Setting Up a docker-compose.yml File
To demonstrate how to set up a docker-compose.yml file and work with Docker Compose, you’ll create a web server environment using the official Nginx image from Docker Hub, the public Docker registry. This containerized environment will serve a single static HTML file.
Start off by creating a new directory in your home folder, and then moving into it:
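The directory name is your choice; compose-demo matches the container and network names that appear in the output later in this tutorial:
mkdir ~/compose-demo
cd ~/compose-demo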
In this directory, set up an application folder to serve as the document root for your Nginx environment:
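mkdir app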
Using your preferred text editor, create a new index.html file within the app folder:
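nano app/index.html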
Place the following content into this file:
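The exact markup is not important for this demo; any static page will do. A minimal placeholder might look like this:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Docker Compose Demo</title>
</head>
<body>
  <h1>Hello from a containerized Nginx server</h1>
</body>
</html>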
Save and close the file when you’re done. If you are using nano, you can do that by typing CTRL+X, then Y and ENTER to confirm.
Next, create the docker-compose.yml file:
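nano docker-compose.yml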
Insert the following content in your docker-compose.yml file:
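Based on the description that follows, the file needs a version definition, a single web service built on nginx:alpine, a port redirection from 8000 to 80, and a shared volume mapping ./app to the Nginx document root. The version value below is an assumption; any Compose file version supported by your installation will work:
version: '3.7'

services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"
    volumes:
      - ./app:/usr/share/nginx/html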
The docker-compose.yml file typically starts off with the version definition. This will tell Docker Compose which configuration version you’re using.
You then have the services block, where you set up the services that are part of this environment. In your case, you have a single service called web. This service uses the nginx:alpine image and sets up a port redirection with the ports directive. All requests on port 8000 of the host machine (the system from where you’re running Docker Compose) will be redirected to the web container on port 80, where Nginx will be running.
The volumes directive will create a shared volume between the host machine and the container. This will share the local app folder with the container, and the volume will be located at /usr/share/nginx/html inside the container, where it overrides the default document root for Nginx.
Save and close the file.
You have set up a demo page and a docker-compose.yml file to create a containerized web server environment that will serve it. In the next step, you’ll bring this environment up with Docker Compose.
Step 3 — Running Docker Compose
With the docker-compose.yml file in place, you can now execute Docker Compose to bring your environment up. The following command will download the necessary Docker images, create a container for the web service, and run the containerized environment in background mode:
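docker compose up -d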
Docker Compose will first look for the defined image on your local system, and if it can’t locate the image it will download the image from Docker Hub. You’ll see output like this:
Creating network "compose-demo_default" with the default driver
Pulling web (nginx:alpine)...
alpine: Pulling from library/nginx
cbdbe7a5bc2a: Pull complete
10c113fb0c77: Pull complete
9ba64393807b: Pull complete
c829a9c40ab2: Pull complete
61d685417b2f: Pull complete
Digest: sha256:57254039c6313fe8c53f1acbf15657ec9616a813397b74b063e32443427c5502
Status: Downloaded newer image for nginx:alpine
Creating compose-demo_web_1 ... done
Note: If you run into a permission error regarding the Docker socket, this means you skipped Step 2 of How To Install and Use Docker on Linux. Going back and completing that step will enable permissions to run docker commands without sudo.
Your environment is now up and running in the background. To verify that the container is active, you can run:
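docker compose ps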
This command will show you information about the running containers and their state, as well as any port redirections currently in place:
Name Command State Ports
----------------------------------------------------------------------------------
compose-demo_web_1 /docker-entrypoint.sh ngin ... Up 0.0.0.0:8000->80/tcp
You can now access the demo application by pointing your browser to either localhost:8000 if you are running this demo on your local machine, or your_server_domain_or_IP:8000 if you are running this demo on a remote server.
You’ll see the demo page you created in the previous step, served by Nginx.
The shared volume you’ve set up within the docker-compose.yml file keeps your app folder files in sync with the container’s document root. If you make any changes to the index.html file, they will be automatically picked up by the container and thus reflected in your browser when you reload the page.
In the next step, you’ll see how to manage your containerized environment with Docker Compose commands.
Step 4 — Getting Familiar with Docker Compose Commands
You’ve seen how to set up a docker-compose.yml file and bring your environment up with docker compose up. You’ll now see how to use Docker Compose commands to manage and interact with your containerized environment.
To check the logs produced by your Nginx container, you can use the logs command:
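docker compose logs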
You’ll see output similar to this:
Attaching to compose-demo_web_1
web_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
web_1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
web_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
web_1 | 10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
web_1 | 10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
web_1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
web_1 | /docker-entrypoint.sh: Configuration complete; ready for start up
web_1 | 172.22.0.1 - - [02/Jun/2020:10:47:13 +0000] "GET / HTTP/1.1" 200 353 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" "-"
If you want to pause the environment execution without changing the current state of your containers, you can use:
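docker compose pause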
Pausing compose-demo_web_1 ... done
To resume execution after issuing a pause:
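docker compose unpause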
Unpausing compose-demo_web_1 ... done
The stop command will terminate the container execution, but it won’t destroy any data associated with your containers:
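docker compose stop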
Stopping compose-demo_web_1 ... done
If you want to remove the containers, networks, and volumes associated with this containerized environment, use the down command:
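docker compose down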
Removing compose-demo_web_1 ... done
Removing network compose-demo_default
Notice that this won’t remove the base image used by Docker Compose to spin up your environment (in your case, nginx:alpine). This way, whenever you bring your environment up again with docker compose up, the process will be much faster since the image is already on your system.
In case you want to also remove the base image from your system, you can use:
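docker image rm nginx:alpine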
Untagged: nginx:alpine
Untagged: nginx@sha256:b89a6ccbda39576ad23fd079978c967cecc6b170db6e7ff8a769bf2259a71912
Deleted: sha256:7d0cdcc60a96a5124763fddf5d534d058ad7d0d8d4c3b8be2aefedf4267d0270
Deleted: sha256:05a0eaca15d731e0029a7604ef54f0dda3b736d4e987e6ac87b91ac7aac03ab1
Deleted: sha256:c6bbc4bdac396583641cb44cd35126b2c195be8fe1ac5e6c577c14752bbe9157
Deleted: sha256:35789b1e1a362b0da8392ca7d5759ef08b9a6b7141cc1521570f984dc7905eb6
Deleted: sha256:a3efaa65ec344c882fe5d543a392a54c4ceacd1efd91662d06964211b1be4c08
Deleted: sha256:3e207b409db364b595ba862cdc12be96dcdad8e36c59a03b7b3b61c946a5741a
Note: Please refer to our guide on How to Install and Use Docker for a more detailed reference on Docker commands.
Conclusion
You’ve now installed Docker Compose, tested your installation by running a “Hello World” example, and explored some basic commands.
While the “Hello World” example confirmed your installation, the simple configuration does not show one of the main benefits of Docker Compose — being able to bring a group of Docker containers up and down all at the same time.
Reference
https://docs.docker.com/compose/install/
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-centos-7
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-20-04
https://phoenixnap.com/kb/install-docker-compose-centos-7