From chroot to orchestration
Dive into this guide that will take you from the basics of Docker to mastering Kubernetes. By the end of this journey, you'll have a solid understanding of how to create, manage, and orchestrate containers for your applications.
To start your container journey, the first step is pulling Docker images using the docker pull command:
docker pull <name>:<version>
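For example, to fetch the specific images used later in this guide:

```shell
# Pull a specific tagged image from Docker Hub
docker pull ubuntu:bionic
docker pull alpine:3.10

# Omitting the tag is equivalent to asking for :latest
docker pull alpine
```

Pinning a tag rather than relying on `latest` keeps your environment reproducible.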
To run an Ubuntu container interactively, use the following command:
docker run -it --name docker-host --rm --privileged ubuntu:bionic
chroot is an operation that changes the apparent root directory for the current running process and its children. You can set a new root to the current directory using:
chroot . bash
Getting a working shell inside the chroot takes a few steps:

- Copy `bin/bash` into a `bin` folder inside the chroot location.
- Run `ldd bin/bash` to list the shared libraries bash needs, then copy each of them into the chroot.
- Type `exit` to leave the chroot again.
- Remember that `mkdir blah/lib{,64}` creates both the `lib` and `lib64` directories in one command.

One issue to be aware of: from inside a chroot you can still see all processes on the host with `ps aux` — chroot changes the file system view but does not isolate processes.
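The steps above can be sketched as a short script. The chroot directory name and the library paths are illustrative — run `ldd /bin/bash` yourself, since the exact libraries vary by system:

```shell
# Create a hypothetical chroot directory with bin, lib, and lib64
mkdir -p /my-new-root/{bin,lib,lib64}

# Copy the shell in
cp /bin/bash /my-new-root/bin/

# ldd lists the shared libraries bash needs; copy each one into the chroot
ldd /bin/bash
# for example (exact paths vary by distribution):
cp /lib/x86_64-linux-gnu/libtinfo.so.5 /my-new-root/lib/
cp /lib64/ld-linux-x86-64.so.2 /my-new-root/lib64/

# Now bash can start with /my-new-root as its root
chroot /my-new-root bash
```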
To connect to a running container, use:
docker exec -it <name> bash
To create a chroot environment easily, you can use debootstrap:
debootstrap --variant=minbase bionic /better-root
To hide external resources from the chroot, you can use unshare:
unshare --mount --uts --ipc --net --pid --fork --map-root-user chroot /better-root bash
Here's how you can get started with a raw Ubuntu environment:
apt-get update \
&& apt-get install -y debootstrap cgroup-tools htop \
&& debootstrap --variant=minbase bionic /better-root \
&& cgcreate -g cpu,memory,blkio,devices,freezer:/sandbox \
&& unshare --mount --uts --ipc --net --pid --fork --map-root-user chroot /better-root bash
Control groups (cgroup-tools) allow you to limit resources like memory and CPU per chroot:
Control groups (via cgroup-tools) let you limit resources like CPU and memory per chroot:

cat /sys/fs/cgroup/cpu/sandbox/tasks
cat /sys/fs/cgroup/cpu/sandbox/cpu.shares
cgset -r cpu.cfs_period_us=100000 -r cpu.cfs_quota_us=$[ 5000 * $(getconf _NPROCESSORS_ONLN) ] sandbox
cgset -r memory.limit_in_bytes=80M sandbox

To test the limits, keep a process busy indefinitely with yes or yes > /dev/null running in the background.
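Putting those pieces together, a rough session might look like the following. This assumes cgroup v1 paths and root privileges, as in the setup above:

```shell
# Create the cgroup and move the current shell ($$) into it
cgcreate -g cpu,memory:/sandbox
cgclassify -g cpu,memory:/sandbox $$

# Cap CPU to roughly 5% of each core and memory to 80 MB
cgset -r cpu.cfs_period_us=100000 sandbox
cgset -r cpu.cfs_quota_us=$[ 5000 * $(getconf _NPROCESSORS_ONLN) ] sandbox
cgset -r memory.limit_in_bytes=80M sandbox

# Burn CPU in the background; watching htop, usage stays capped
yes > /dev/null &
```

Every process started from the classified shell inherits the same limits.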
To connect to an existing Docker image and chroot into it, follow these steps:
docker run --rm -dit --name my-alpine alpine:3.10 sh
docker export -o dockercontainer.tar my-alpine
mkdir container-root
tar xf dockercontainer.tar -C container-root
Then unshare and chroot into the extracted file system (the shell is ash on Alpine):

unshare --mount --uts --ipc --net --pid --fork --map-root-user chroot container-root ash
Running Docker images with various useful flags:
docker run -it alpine:3.10
docker image prune # Clean up dangling images
| Flag | Description |
|-------------------|----------------------------------------------|
| --interactive, -i | Keep STDIN open even if not attached |
| --tty, -t | Allocate a pseudo-TTY |
| --detach, -d | Run the container in the background |
| --name | Set the name of the container |
| --rm | Automatically remove the container on exit |
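Combining several of these flags, you can start a disposable interactive container (the name `scratchpad` is arbitrary):

```shell
# -it gives an interactive TTY, --rm cleans up on exit, --name labels it
docker run -it --rm --name scratchpad alpine:3.10 sh
```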
Other Docker commands and tips include:
- docker attach to reconnect to a detached container.
- docker history node:12-stretch to view the change log of an image's layers.
- docker top <container id/name> to list the processes running in a container.
- docker search <query> to search Docker Hub for images.
- docker pause|unpause|restart to manage container states.

Creating a Dockerfile to build a Node.js application:
FROM node:20-alpine
CMD ["node", "-e", "console.log('omg hello!')"]
To build an image:
docker build .
docker build --tag my-node-app .
The --init flag runs a minimal init process as PID 1 that forwards signals like SIGTERM to your application — a quick way to make containers stoppable without handling signals yourself (the Node.js example below handles SIGTERM explicitly instead).
A Node.js application that listens on port 3000:
const http = require('http');
process.on('SIGTERM', () => process.exit());
http
.createServer((req, res) => {
console.log('request received');
res.end('omg hello!', 'utf-8');
})
.listen(3000);
console.log('started!');
Dockerfile for the Node.js application:
FROM node:20-alpine
COPY index.js index.js
CMD ["node", "index.js"]
To run the application in a Docker container:
docker run --init --rm --publish 3000:3000 my-node-app
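With the image built, you can verify the app end to end from the host (the container name `web` here is arbitrary):

```shell
# Start the container in the background
docker run -d --init --rm --publish 3000:3000 --name web my-node-app

# The app responds on the published port
curl http://localhost:3000

# Stop the container; --rm removes it automatically
docker kill web
```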
Bind mounts allow you to share files between your local file system and a container. Simply specify the directory to mount when running a container. For example:
docker run -v /local/path:/container/path my-image
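For example, to run the index.js from earlier inside a Node container without baking it into an image (the /app path is just a convention):

```shell
# Mount the current directory at /app and run from there;
# edits on the host are visible in the container immediately
docker run --rm -v "$PWD":/app -w /app node:20-alpine node index.js
```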
Volumes are managed by Docker and are suitable for production use. To use volumes, specify the --mount flag:
docker run --mount type=volume,src=volume-name,dst=/container/path my-image
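A quick way to see that data in a named volume outlives any single container (the volume name `app-data` is arbitrary; Docker creates it on first use):

```shell
# Write a file into the volume from one container...
docker run --rm --mount type=volume,src=app-data,dst=/data alpine:3.10 \
  sh -c 'echo hello > /data/greeting'

# ...and read it back from a brand-new container
docker run --rm --mount type=volume,src=app-data,dst=/data alpine:3.10 \
  cat /data/greeting
```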
Docker makes it easy to create development environments using .devcontainer/Dockerfile and devcontainer.json configurations. Customize your development environment with necessary tools and settings.
Docker provides multiple networking options. You can create your own Docker network using docker network create and connect containers to it. Alternatively, use Docker Compose to manage networking across multiple containers.
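A minimal sketch of a user-defined network: containers attached to it can reach each other by name via Docker's embedded DNS (the network and container names here are illustrative):

```shell
# Create a network and attach a database container to it
docker network create app-net
docker run -d --network app-net --name db mongo:3

# From another container on the same network, "db" resolves by name
docker run -it --rm --network app-net alpine:3.10 ping db
```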
Docker Compose allows you to define and run multi-container applications. Use a docker-compose.yml file to specify services, volumes, and networks. For instance, a basic setup might look like this:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/home/node/code
      - /home/node/code/node_modules
    links:
      - db
    environment:
      - MONGO_CONNECTION_STRING=mongodb://db:27017
  db:
    image: mongo:3
You can scale containers using docker-compose up --scale and set up load balancing with Nginx.
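For example, to run three instances of the web service from the compose file above (note that a fixed host port mapping like 3000:3000 conflicts when scaled, which is why a load balancer such as Nginx goes in front):

```shell
docker-compose up --scale web=3
```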
Kubernetes is a powerful container orchestration tool that provides features such as automated deployment, scaling, self-healing, and service discovery.
To start using Kubernetes:
Install kubectl and kompose (for converting Docker Compose to Kubernetes).
Create your Kubernetes configuration files or convert your Docker Compose file using kompose.
Apply your configuration to Kubernetes using kubectl apply.
Manage your Kubernetes clusters with kubectl.
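Those steps, sketched end to end (the generated file names depend on your compose file; kompose writes one manifest per service):

```shell
# Convert docker-compose.yml into Kubernetes manifests
kompose convert

# Apply the generated manifests to the cluster
kubectl apply -f .

# Inspect what is running
kubectl get pods
kubectl get services
```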
Check out other containerization technologies like OCI containers, Buildah, and Podman as alternatives to Docker.
For more details and code examples, check out the complete guide repository for the course I followed, along with its documentation.