What is Docker? Docker is a platform for running software in a way that is agnostic to the operating system it runs on. This is extremely useful for solving the 'but it works on my machine' problem. This post will go through an intro to Docker and how you can use it. A crucial part of Docker is the container: a Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
I did a post back in 2018 on containerization and mentioned Docker too! https://jackmckew.dev/episode-6-containerization.html
This post is part of a series on Docker/Kubernetes; find the other posts at:
- Develop and Deploy with Docker
- Develop and Deploy Multi Container Applications
- Intro to Kubernetes
- Developing with Kubernetes
- Deploying with Kubernetes
Docker isn't a single program; it's a suite of programs which we'll refer to as an ecosystem. A brief description of each component that makes up the ecosystem:
| Component | Description |
| --- | --- |
| Docker Client | The Docker client is how we interact with Docker (e.g. on the command line) |
| Docker Server | The Docker server manages running programs inside isolated containers |
| Docker Machine | Docker Machine lets us install the Docker engine on virtual hosts (essentially an operating system inside an operating system) |
| Docker Images | A Docker image is a read-only template of instructions on how to create a Docker container |
| Docker Hub | Docker Hub is a free service provided by Docker for finding and sharing container images |
| Docker Compose | Docker Compose is how we can run multi-container applications |
Docker is free to download from the Docker website.
Managing Docker from the Command Line
There are many useful commands for interacting with the Docker client.
| Command | Description |
| --- | --- |
| `docker run <image>` | Creates & starts a container from either an ID or an image name. If the image isn't found on the local PC, it'll attempt to download it from Docker Hub |
| `docker ps` | Shows all running containers on the current PC |
| `docker ps --all` | Shows all running & stopped containers on the current PC |
| `docker stop <container_id>` | Stops a running container; if it doesn't stop after 10 seconds, the container is killed |
| `docker kill <container_id>` | Immediately shuts down the container without allowing the applications inside it to stop gracefully |
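For example, a typical lifecycle using these commands might look like the following sketch (assuming the `redis` image; the container ID placeholder would be replaced with a real ID from `docker ps`):

```shell
# Create & start a container from the redis image (downloads it if needed)
docker run -d redis

# List running containers to find the container ID
docker ps

# Gracefully stop the container (falls back to killing it after 10 seconds)
docker stop <container_id>

# Or shut it down immediately
docker kill <container_id>
```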
A Dockerfile is a read-only template with instructions for creating a Docker image. It's composed of a series of commands, along with their arguments. A straightforward example of a Dockerfile is:
```dockerfile
# Use an existing image as a base
FROM alpine

# Download and install a dependency
RUN apk add --update redis

# Tell the image what to do when it starts as a container
CMD ["redis-server"]
```
When we create this Docker image with `docker build .` (run in the directory containing the Dockerfile):
- We use a base starting point of Alpine (a distribution of Linux), if this isn't found on the local PC, it'll download from Docker Hub
- We install a program called `redis` using `apk`, a package manager pre-installed in Alpine
- We run `redis-server` inside the container
Normally when we run `docker build`, it'll return an image ID, which we'd then use to run the container with `docker run`. Rather than having to copy/paste the ID each time, we can tag an image with a human-readable name.
To do this we use the `-t` option for `docker build`. An example is:

`docker build -t [docker_id]/[project_name]:[version] .`
So for me to tag an image I'd run:
`docker build -t jackmckew/my-new-docker-image:latest .`
If we then wanted to run this image, we can do so with
docker run jackmckew/my-new-docker-image
The version is optional; if it isn't provided, it defaults to `latest`.
Alpine Docker Images
Note that we used `FROM alpine` earlier in our Dockerfile. `alpine` is a term used in Docker for the most compressed and stripped-down version of an image. If we wanted to use a different image, like those listed on Docker Hub (https://hub.docker.com/search?q=&type=image), we can often get the `alpine` version of an image by using the `alpine` tag.
Check the description of an image to check whether an alpine version is available.
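As a hypothetical example, to use the stripped-down variant of the official Node.js image we'd specify the tag after the image name:

```dockerfile
# Use the alpine variant of the official Node.js image
FROM node:alpine
```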
Change Working Directory
We can change the working directory inside the container with `WORKDIR`. This is very useful if we don't want to copy things into the root directory of the container. If the folder given as the argument to `WORKDIR` isn't found, it'll be created automatically.
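A minimal sketch of this (the path `/usr/app` is just an illustrative choice):

```dockerfile
FROM node:alpine

# All following instructions (and the running container) use /usr/app
# as the working directory; it's created if it doesn't already exist
WORKDIR /usr/app
```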
Docker defaults to NOT including any files from the local PC inside an image. We must explicitly copy in any files we want to use inside the container. One way to do this is the `COPY` instruction in the Dockerfile.
This can be done with:
`COPY [location_on_local_PC_to_copy] [place_to_store_inside_container]`
An example of this is:
`COPY ./ ./`, which copies the current folder (relative to where the terminal is) into the current working directory inside the container.
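Putting `WORKDIR` and `COPY` together, a sketch Dockerfile for a hypothetical Node.js app might look like:

```dockerfile
FROM node:alpine

WORKDIR /usr/app

# Copy the project files from the local PC into /usr/app in the image
COPY ./ ./

# Install dependencies and define the startup command
RUN npm install
CMD ["npm", "start"]
```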
By default no traffic will be routed into a container; a container has its own set of ports that are not connected to the local PC. Thus we need to set up a mapping between the local PC's ports and the container's ports.
This is not changed within the Dockerfile, but rather when we run the container, with the `-p` flag. This can be done with:

`docker run -p [local_pc_port]:[container_port] [image_name]`
By default there is no limitation on traffic getting out of a container, only on traffic getting in. The local PC port and the container port do NOT have to match.
Docker Compose is a separate tool installed alongside the Docker CLI, used to start up multiple containers at the same time. It helps automate the arguments that would be needed for multiple containers to talk to each other.
We create a `docker-compose.yml` file that we feed into the Docker Compose CLI; this essentially replaces the arguments that we've been typing into the Docker CLI previously.
Here's an example of a `docker-compose.yml` which will:

- Create a container that hosts a redis server
- Create a container that hosts a nodejs app
- Map the ports of the nodejs app container
```yaml
version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "8000:8000"
```
Let's break it down: first we specify the version of docker-compose to run on. Then, under `services`, each container that we want to run is named and indented. Inside each service declaration we can either specify an image to use, or tell docker-compose to build a Dockerfile with `build` (note this will look in the given directory for a Dockerfile). Finally, we map port 8000 of the local PC to port 8000 of the container.
Using docker-compose also automatically puts the containers on the same network, which would be a major pain to set up without it.
Once we've created a `docker-compose.yml`, we can run it with the command `docker-compose up` inside the same directory. We can also specify that we want to rebuild all the images inside the `docker-compose.yml` file with `--build`, so the command becomes `docker-compose up --build`.
Stopping Docker Compose
Rather than using `docker stop` with each of the IDs of the containers that we created, we can simply run `docker-compose down` to stop all the containers started by `docker-compose up`. This is particularly useful after `docker-compose up -d`, which runs the containers in the background, so we can't just CTRL+C out of them.
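A typical session might look like the following sketch (run from the directory containing the `docker-compose.yml`):

```shell
# Build (if needed) and start all the services in the background
docker-compose up -d --build

# ...later, stop and remove all the containers it started
docker-compose down
```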
If any container crashes, docker-compose defaults to the restart policy of `"no"`, which never attempts to restart the container. The possible restart policies in docker-compose are:
| Policy | Behaviour |
| --- | --- |
| `"no"` | Never restart the container when it stops or crashes |
| `always` | If the container stops for any reason at all, attempt to restart it |
| `on-failure` | Only restart if the container stops with an error |
| `unless-stopped` | Always restart unless the container is forcibly stopped |
The restart policy is defined along with other arguments under the service name.
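For example, to have the hypothetical `node-app` service from earlier restart on errors, the policy sits alongside its other arguments:

```yaml
services:
  node-app:
    build: .
    # Only restart this container if it exits with an error
    restart: on-failure
    ports:
      - "8000:8000"
```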
Check Docker Compose Status
Similarly to running `docker ps`, we can run `docker-compose ps` to check all the running containers that we started with `docker-compose up`. This must be used in the same directory as the `docker-compose.yml` file.