
Some Notes on a Docker Course


Docker

Why use Docker?

It is more lightweight, economical, and scalable than a virtual machine.

Docker containers share the host operating system whereas VMs have a guest operating system above the host operating system.

What is Docker?

Image

Container

Docker Client

Docker Server (aka Docker Daemon)

Example of running a command: the Docker client (e.g. the Docker CLI) communicates with the Docker server, which does the heavy lifting.

docker run <image name>, e.g. docker run hello-world

What is a container?

Need to understand how OS runs on computer.

Most OSes have a Kernel, a running software process that governs access between all the programs running on your computer and all the physical hardware that is connected to your computer.

Chrome -> System Call -> Kernel -> CPU, Memory, HDD, etc.

Kernel is an intermediate layer that governs access between programs and your actual hardware.

Running programs interact with kernel through system calls, which are just like function invocations or APIs from the OS.

A container is created from an image:

An image is a filesystem snapshot plus a specific startup command.

The snapshot is copied into the container's filesystem, the portion of the hard drive assigned to that container.

The startup command's process then runs, isolated within the container.

A container is a running process along with a subset of physical resources allocated specifically to that process. The resources are assigned via namespacing, and control groups (cgroups) limit the amount of resources each process can use. Both are Linux-specific kernel features.

On macOS and Windows, Docker actually runs a Linux virtual machine, and containers are created inside that virtual machine. Inside the VM, the Linux kernel is in charge of limiting and isolating access to the hardware on my computer.

An image is a snapshot of the file system along with a very specific startup command.

Docker commands

Run a container with image

docker run <image name>

docker run <image name> <command>

The additional command must exist within the image's filesystem; I cannot run ls against just any image, only against images whose filesystem actually includes an ls executable.
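For example, assuming the standard busybox and hello-world images from Docker Hub:

docker run busybox ls

works, because the busybox image ships with an ls executable, whereas

docker run hello-world ls

fails, because the hello-world image's filesystem contains only its single program.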

List running containers

docker ps

docker ps --all

Container lifecycle

When does a container actually get shut down?

Creating and starting a container are two separate processes.

docker run = docker create + docker start

docker create <image name>

docker start <containerId>

docker start -a <containerId>

-a tells Docker to attach to the container, watch for its output, and print it to my terminal.

Deletes all stopped containers and build cache

docker system prune

Retrieving log outputs

Say you start up a container without the -a flag and don’t want to stop and restart the container with the flag.

docker logs <containerId>

Stopping containers

Some containers will have processes that continue to run, even when detached from terminal output.

docker stop <containerId>

docker kill <containerId>

Multi-command containers

Executing commands in running containers

Outside a container, I have no access to anything inside the container through simple means. For example, if my container is running redis, I cannot use the global redis-cli from my terminal to interact with the redis instance within the container.

docker exec -it <containerId> <command>

-it allows me to type input directly into the container

The purpose of the -it flag

-it is actually two separate flags

-i allows me to type input directly into the container

-t attaches me to the container's terminal and formats the text nicely: everything I enter and everything the container outputs shows up properly on screen. Without it, the output is bare-bones.

Getting a command prompt in a container

docker exec -it <containerId> sh

sh is a command processor or shell

Starting with a shell

docker run -it <image name> sh

Container isolation

Building custom images through Docker

Dockerfile -> Docker client -> Docker server -> Usable image

Most Dockerfiles follow a specific format:

  1. Specify a base image
  2. Run some commands to install additional programs
  3. Specify command to run on container start

Building a Dockerfile

# Specify a base image (an OS layer)
FROM <baseImage>

# Run commands to install additional programs
RUN <command>

# Specify the command to run on container startup
CMD <command>
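A concrete example, the redis-on-alpine Dockerfile that the build steps below walk through (apk is Alpine's package manager):

FROM alpine
RUN apk add --update redis
CMD ["redis-server"]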

cd into the directory containing the Dockerfile

docker build .

. is the current directory, which is the build context: the set of files we want to encapsulate in the image.

Each command results in a step in the build process.

  1. FROM specifies the base image; the Docker daemon downloads it if it is not already cached locally.
  2. For the RUN instruction, the Docker server takes the image from the previous step and uses it to create a temporary container.
  3. The RUN command executes inside that temporary container; in the Redis example above, it installs Redis into the container's filesystem.
  4. Docker takes a snapshot of the temporary container's new filesystem, shuts the container down, and is left with an image of the base image plus Redis, ready for the next instruction.
  5. CMD records on the image the primary command to run when it is started up as a container; nothing executes at build time.
  6. The output of the whole build is whatever image was generated by the last step.
What is a base image?

Analogy: writing a Dockerfile is like being given a brand new computer with no OS and being told to install Google Chrome on it.

  1. Install OS
  2. Install Chrome
  3. Execute Chrome

Rebuilds with cache

Each build step produces an intermediate image, and Docker caches these. On a rebuild, Docker reuses the cached image for every step until it reaches a changed instruction, then rebuilds everything from that point down.

Lesson: put the instructions most likely to change as far down in the Dockerfile as possible, so Docker can use the cache for as many steps as possible.

Tagging an image with a name

docker build -t <name> .

-t allows me to tag the image

docker build -t postmac/redis:latest .

The convention for name is:

myDockerId/repoName:version

version can be a number too, but the most recent build is usually latest

Official community images have shorter names with no Docker ID prefix, e.g. redis, as they live in Docker Hub's open-source library.

The . at the end is the build context. It specifies the directory of the files/folders to use for the build.

When running, if I don’t specify the version, then the latest is used by default

docker run postmac/redis

This entire process is called tagging the image.

Technically, the version is the tag.

I can manually do what a Dockerfile does: start a container, run commands inside it by hand, and then commit the result as an image that I can use in the future.
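A sketch using docker commit (alpine and redis match the earlier example; the -c flag sets the new image's default startup command):

docker run -it alpine sh

apk add --update redis

docker commit -c 'CMD ["redis-server"]' <containerId>

The second command runs inside the container's shell; the third runs from another terminal and prints the id of the newly created image.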

Making real projects with Docker

  1. Create NodeJS web app
  2. Create Dockerfile
  3. Build an image from the Dockerfile
  4. Run the image as a container
  5. Connect to web app from a browser

Flow

e.g. a first Dockerfile for the app (node:alpine rather than plain alpine, since the plain alpine image does not include Node or npm):

FROM node:alpine
COPY ./ ./
RUN npm install
CMD ["npm", "start"]

In the Docker ecosystem, alpine is synonymous with a small image. It means the image of whatever it is you’re pulling is as compact as possible.

Many images have alpine versions. It is a very stripped down image. You might get simple programs like ping, ls, a small text editor like nano, etc.

COPY ./ ./ copies my files into the container's root directory, as siblings to bin, etc, home, and so on. This is bad practice; specify a working directory instead (see WORKDIR below).

Port mapping/forwarding

Port forwarding is strictly a runtime configuration; it is something we do when starting up the container, using the -p flag in the form -p <localPort>:<containerPort>.

-p 8080:8080

docker run -p 5000:8080 <imageId>

This routes traffic arriving on port 5000 of my machine to port 8080 inside the container.

Specifying a working directory

WORKDIR <referenceToAppFolder>

Example:

WORKDIR /usr/app

Any instruction that follows (and any shell opened via docker exec) runs relative to this folder; Docker creates the folder if it does not exist.

Cache busting and rebuilds

Copy package.json and run npm install before copying the rest of the source. That way, editing source files does not invalidate the cached dependency-install step; only changes to package.json bust that layer.
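A minimal sketch of the ordering, assuming a Node app with its working directory at /usr/app:

FROM node:alpine
WORKDIR /usr/app
COPY ./package.json ./
RUN npm install
COPY ./ ./
CMD ["npm", "start"]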

Docker compose with multiple local containers

We have two options when we need to set up some networking between two containers.

  1. Use Docker CLI’s networking features. This is unpleasant as you have to re-run commands each time. This is almost never done.

  2. Use Docker compose. This is much easier to use and is the recommended way to set up networking.

Docker compose

Exists to keep you from having to run repetitive commands using the Docker CLI.

By specifying services in the same Docker compose file, Docker automatically networks them together so that they can communicate.

Docker Compose automatically resolves the names of services defined in the file as hostnames, so one service (e.g. a web app) can reach another (e.g. a database) simply by using the service's name as the host in its connection configuration.
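A minimal sketch of a docker-compose.yml with two services (the service names are illustrative):

version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - "4001:8081"

Inside node-app, Redis is reachable at host redis-server on its default port 6379.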

Running

Docker automatically looks for a compose file when running the following command:

docker-compose up

Run in the background:

docker-compose up -d

Rebuild

docker-compose up --build

Stopping containers

Close all running containers from Docker Compose:

docker-compose down

How to deal with containers that crashed

By default, a container whose app crashed simply stays stopped. Docker Compose supports restart policies ("no", always, on-failure, unless-stopped) that control whether it restarts such containers, as sketched below.
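A sketch of setting a restart policy on a service (the service name is illustrative; "no" must be quoted in YAML because a bare no parses as the boolean false):

services:
  node-app:
    restart: on-failure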

docker-compose ps shows the status of the containers belonging to the compose file. It must be run from the directory where the docker-compose.yml file is located, since Compose needs the file to determine which containers to report on.

docker-compose ps

Creating a production-grade workflow

Cycle through the following flow:

Development -> Testing -> Deployment

Docker is just a tool in the normal development flow, e.g. part of pushing, merging a PR, CI, and deployment.

I will want to create multiple Dockerfiles, e.g. Dockerfile.dev for development and Dockerfile for production.

Building from a Dockerfile with a custom name

docker build -f Dockerfile.dev .

Because create-react-app generates a node_modules folder, and I am also installing dependencies via the Dockerfile, I end up with two copies of the dependencies.

It is best to delete the local node_modules folder that came with create-react-app.

Getting changes inside my container

I want changes made to my source code to get automatically propagated into my container

Docker volumes

The -v flag sets up a volume: a mapping from a folder on my machine into a folder inside the container, or a bookmark for a folder that should stay inside the container.

docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <imageId>

-v /app/node_modules with no colon puts a bookmark on that folder: don't map it to anything, keep the folder that already exists inside the container. node_modules was installed in the container at build time, and we don't want the second mapping, which mirrors our project directory (where node_modules was deleted), to override it.

-v $(pwd):/app <imageId> with colon, map the pwd to the /app folder inside the container

The syntax is sort of like port mapping: <folderOnMyMachine>:<folderInContainer>.

Changes made in the project directory now propagate into the container.

Docker Compose

We can use Docker Compose so we don't have to retype the long volume flags every time we start the container, as in the sketch below.
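A sketch of a dev compose file with the same volumes (file and service names are illustrative, following the Dockerfile.dev convention above):

version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app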

Executing tests

docker run -it <imageId> npm run test

This is good for running tests locally

Live updating tests

I could exec into the existing running container, which would allow me to run the tests there

This might be a nice option for adding as a script to package.json

Docker compose for running tests

Creating another service inside my docker-compose file is an option; however, it isn't perfect. The test output is mixed in with the other services' output, and I cannot type commands into the test runner.

Each process has its own stdin and stdout. It is per process.

Docker attach

Attach to a container and forward input from my terminal to the container:

docker attach <containerId>

docker attach always connects to the container's primary process, not any other PIDs in the container; it therefore cannot route terminal input into other processes inside the container.

docker exec -it 1577a221603a sh

Running ps inside that shell prints all processes running inside the container (PID = process ID).

Need for Nginx

The development server is only meant for development; in production, we want a production-grade web server such as Nginx to serve the built static files.

Multi-step build process

  1. Build phase
  2. Run phase

Allows me to use a different base image for each phase; only the files copied out of the build phase end up in the final image.
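A sketch of a multi-stage Dockerfile for a React app (create-react-app emits its output to a build folder, and Nginx serves static files from /usr/share/nginx/html by default):

# Build phase
FROM node:alpine as builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
RUN npm run build

# Run phase
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html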

Continuous integration and deployment

These steps assume I am in the root dir; with GitHub Actions, I might need a ../

I could name the image tag whatever I want, e.g. my-image or test-image

docker run postmac/repo-name npm run test
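One gotcha when running create-react-app tests in CI: the test runner stays in watch mode by default and never exits. Setting the CI environment variable makes it run once and exit with a status code:

docker run -e CI=true postmac/repo-name npm run test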

Tips

For Docker Compose, depending on the environment I am working in, I may need to specify a particular compose file with -f:

docker-compose -f docker-compose-dev.yml up

docker-compose -f docker-compose-dev.yml up --build

docker-compose -f docker-compose-dev.yml down

Exposing ports via the Dockerfile

EXPOSE 8080

Some deployment platforms make use of this, but as far as Docker itself is concerned, it is just documentation for other developers who look at the Dockerfile

Multi-container application

Env vars

There are two ways to set an environment variable in a docker-compose file:

variableName=value sets the value explicitly inside the container at startup.

variableName alone, with no value, takes the value from the environment of the machine running docker-compose at run time.
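A sketch of both forms (service and variable names are illustrative):

services:
  api:
    environment:
      - REDIS_HOST=redis-server   # value set explicitly
      - PGPASSWORD                # value pulled from my shell at run time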

Multi-container deployments

# syntax=docker/dockerfile:1
FROM node:12-alpine
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --production
COPY . .
CMD ["node", "src/index.js"]

Copying the lockfile (package-lock.json or yarn.lock) on the same line as package.json, as above, locks in the dependency versions!

