Essential Docker Container Operations and Deployment Patterns
Initializing Containers
Loading a local image archive:
docker load -i debian-base.img
# Output:
# 4e4e2a3f1c5b: Loading layer [==================================================>] 120.5MB/120.5MB
# Loaded image: debian:bullseye
docker image ls
# REPOSITORY   TAG        IMAGE ID       CREATED        SIZE
# debian       bullseye   a3c2bd1b1c9d   6 months ago   124MB
Container creation syntax:
docker container run [options] image_name [command]
Spawning a detached interactive instance:
docker run -tid debian:bullseye /bin/bash
# 5f8a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1
# -t: Allocates a pseudo-TTY
# -i: Keeps STDIN open
# -d: Runs the process in the background
docker container ls -a
# CONTAINER ID   IMAGE             COMMAND       CREATED          STATUS         PORTS   NAMES
# 5f8a2b3c4d5e   debian:bullseye   "/bin/bash"   10 seconds ago   Up 9 seconds           quirky_admin
Process Dependency
A container remains active only as long as its primary process executes. Once the command finishes, the container exits.
docker run -tid debian:bullseye echo "Hello"
# 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b7c8d9e0f1a2
docker container ls -a
# CONTAINER ID   IMAGE             COMMAND        CREATED         STATUS                     PORTS   NAMES
# 1a2b3c4d5e6f   debian:bullseye   "echo Hello"   5 seconds ago   Exited (0) 4 seconds ago           busy_keller
# Using sleep keeps the process alive temporarily
docker run -tid debian:bullseye sleep 60
To determine the default command defined by the image, inspect its configuration:
docker image inspect debian:bullseye --format='{{.Config.Cmd}}'
Routine Management Commands
List active and inactive instances:
docker container ls -a
docker container ls
Retrieve detailed configuration metadata:
docker container inspect 5f8a2b
Fetch application logs:
docker container logs 1a2b3c
Execute a new shell inside a running instance:
docker exec -ti 5f8a2b bash
root@5f8a2b3c4d5e:/# mkdir /tmp/data
root@5f8a2b3c4d5e:/# exit
# Single command execution
docker exec -ti 5f8a2b ls /tmp
docker exec -ti 5f8a2b ping 192.168.1.1
Remove stopped or force-remove running instances:
docker container rm 1a2b 3c4d 5e6f
docker container rm -f 7a8b
docker container stop 9c0d
docker container rm 9c0d
Control the instance lifecycle:
docker container start|stop|restart <container_id_or_name>
Forcefully terminate a process (sends SIGKILL, hence exit code 137):
docker container kill 1e2f
# 1e2f
docker container ls -a
# STATUS: Exited (137) 3 seconds ago
Archive and restore filesystems:
# Export filesystem as a tar archive
docker export -o backup.tar 1e2f
# Import archive back as a flattened, single-layer image (tag it to avoid <none>)
docker import backup.tar debian:restored
Runtime Configuration Flags
Detached Execution
Without -d, the container process attaches to the current terminal session.
docker run -ti debian:bullseye
root@a1b2c3d4e5f6:/# exit
Custom Identity
Assigning specific names and hostnames:
docker run -tid --name=web_node --hostname=web_node debian:bullseye
docker exec -ti web_node bash
root@web_node:/# exit
Automatic Restarts
Ensure the container starts automatically upon daemon startup or failure:
docker run -tid --name=api_node --hostname=api_node --restart=always debian:bullseye
Port Mapping
Bind container network ports to the host system:
# Specific port binding
docker run -tid --name=web_proxy --hostname=web_proxy -p 8080:80 nginx:latest
# Random ephemeral host port
docker run -tid --name=web_proxy_2 --hostname=web_proxy_2 -P nginx:latest
Environment Variables
Pass configuration parameters dynamically:
docker run -tid --name=db_node --hostname=db_node -e ADMIN_USER=alice debian:bullseye
docker exec -ti db_node bash
root@db_node:/# echo $ADMIN_USER
alice
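For more than a handful of variables, docker run also accepts an --env-file flag that reads KEY=value pairs from a file. A minimal sketch, assuming an illustrative file name env.list and illustrative values:

```shell
# Write a KEY=value file; names and values here are illustrative
cat <<'EOF' > env.list
ADMIN_USER=alice
ADMIN_ROLE=dba
EOF

# Each line becomes one environment variable at run time (needs a Docker host):
# docker run -tid --name=db_node_2 --env-file env.list debian:bullseye
cat env.list
```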
# Database initialization
docker run -tid --name=mysql_node --hostname=mysql_node \
-e MYSQL_ROOT_PASSWORD=SecRet123 mysql:8.0
# The postgres image takes POSTGRES_PASSWORD for its superuser;
# there is no POSTGRES_ROOT_PASSWORD variable
docker run -tid --name=postgres_node --hostname=postgres_node \
-e POSTGRES_DB=app_db \
-e POSTGRES_USER=app_admin \
-e POSTGRES_PASSWORD=SecRet789 \
postgres:15
Data Persistence
Mount host directories into the container using bind mounts (-v host_path:container_path):
docker run -tid --name=storage_node --hostname=storage_node -v /srv/app_data:/app_data debian:bullseye
docker exec -ti storage_node bash
root@storage_node:/# touch /app_data/file_{1..5}
root@storage_node:/# ls /app_data
# file_1 file_2 file_3 file_4 file_5
Inject custom configuration files:
docker run -tid --name=custom_db --hostname=custom_db \
-e MYSQL_ROOT_PASSWORD=SecRet123 \
-v /srv/mysql/custom.cnf:/etc/mysql/conf.d/custom.cnf \
mysql:8.0
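The host-side file mounted above must exist before the container starts. A minimal sketch that creates it, written locally and then copied into place; the max_connections option is an arbitrary example:

```shell
# Create the config fragment that gets bind-mounted into the container;
# the option shown is an arbitrary example
cat <<'EOF' > custom.cnf
[mysqld]
max_connections=200
EOF
# then, with sufficient privileges on the host:
# mkdir -p /srv/mysql && cp custom.cnf /srv/mysql/custom.cnf
cat custom.cnf
```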
Share volumes across multiple instances:
docker run -tid --name=worker_1 --hostname=worker_1 -v /srv/app_data:/shared debian:bullseye
docker run -tid --name=worker_2 --hostname=worker_2 -v /srv/app_data:/shared debian:bullseye
docker exec -ti worker_1 ls /shared
# file_1 file_2 file_3 file_4 file_5
Network Aliases
Establish local name resolution between containers using --link, which adds an entry to the target's /etc/hosts. The flag is legacy (user-defined networks are the current alternative), but it keeps these examples self-contained:
docker run -tid --name=cache_layer debian:bullseye
docker run -tid --name=app_layer --link=cache_layer:cache debian:bullseye
docker exec -ti app_layer bash
root@app_layer:/# ping cache
# PING cache (172.18.0.3) 56(84) bytes of data.
# 64 bytes from cache (172.18.0.3): icmp_seq=1 ttl=64 time=0.120 ms
Resource Constraints
Restrict CPU and memory consumption:
docker run -tid --name=limited_node --cpus=1.5 --memory=1024m debian:bullseye
docker inspect limited_node | grep -i memory
# "Memory": 1073741824,
# "MemoryReservation": 0,
# "MemorySwap": 2147483648,
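The inspected values are plain bytes: --memory=1024m is 1024 × 1024² bytes, and when --memory-swap is not given, Docker allows swap equal to the memory limit, so the combined MemorySwap value defaults to twice the limit. A quick check of the arithmetic behind the output above:

```shell
# --memory=1024m expressed in bytes
echo $((1024 * 1024 * 1024))       # -> 1073741824 (Memory)

# default combined memory+swap limit: twice the memory limit
echo $((2 * 1024 * 1024 * 1024))   # -> 2147483648 (MemorySwap)
```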
Architecture Deployment: Scalable WordPress with MySQL Replication
Primary Database Initialization
mkdir -p /srv/wp_cluster/master_db
cat <<EOF > /srv/wp_cluster/master_db/custom.cnf
[mysqld]
server_id=1
log_bin=primary_bin
EOF
docker run -d --name=wp_db_master --hostname=wp_db_master \
--restart=always \
--cpus=1 --memory=1g \
-e MYSQL_ROOT_PASSWORD=SecRet123 \
-v /srv/wp_cluster/master_db/custom.cnf:/etc/mysql/conf.d/custom.cnf \
-v /srv/wp_cluster/master_db/data:/var/lib/mysql \
mysql:8.0
Database Configuration
docker exec -ti wp_db_master bash
root@wp_db_master:/# mysql -uroot -pSecRet123
mysql> CREATE DATABASE site_db CHARSET utf8mb4;
mysql> CREATE USER 'wp_admin'@'%' IDENTIFIED BY 'WPpass456';
mysql> GRANT ALL ON site_db.* TO 'wp_admin'@'%';
mysql> FLUSH PRIVILEGES;
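The same statements can run automatically instead of interactively: the official mysql image executes *.sql files placed in /docker-entrypoint-initdb.d when the data directory is first initialized. A sketch that prepares such a file (the filename is arbitrary):

```shell
# SQL executed by the mysql image on first initialization when mounted
# under /docker-entrypoint-initdb.d (filename is arbitrary)
cat <<'EOF' > init_site_db.sql
CREATE DATABASE site_db CHARSET utf8mb4;
CREATE USER 'wp_admin'@'%' IDENTIFIED BY 'WPpass456';
GRANT ALL ON site_db.* TO 'wp_admin'@'%';
FLUSH PRIVILEGES;
EOF
# mount it when starting the container (needs a Docker host):
# docker run ... -v "$PWD/init_site_db.sql:/docker-entrypoint-initdb.d/init_site_db.sql" mysql:8.0
cat init_site_db.sql
```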
Frontend Application Deployment
docker run -d --name=wp_app_1 --hostname=wp_app_1 \
--restart=always \
--cpus=1.5 --memory=1g \
--link=wp_db_master:db_host \
-e WORDPRESS_DB_HOST=db_host \
-e WORDPRESS_DB_USER=wp_admin \
-e WORDPRESS_DB_PASSWORD=WPpass456 \
-e WORDPRESS_DB_NAME=site_db \
-e WORDPRESS_TABLE_PREFIX=wp_ \
-v /srv/wp_cluster/html:/var/www/html \
-p 8080:80 \
wordpress:latest
Horizontal Scaling
docker run -d --name=wp_app_2 --hostname=wp_app_2 \
--restart=always \
--cpus=1.5 --memory=1g \
--link=wp_db_master:db_host \
-v /srv/wp_cluster/html:/var/www/html \
-p 8081:80 \
wordpress:latest
Load Balancing Endpoint
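The run command below bind-mounts /srv/wp_cluster/haproxy.cfg, whose contents are not shown in this guide. A minimal sketch of what it could contain, assuming HAProxy listens on 8888 inside the container (matching -p 80:8888) and round-robins across the two link aliases:

```shell
# Generate a minimal HAProxy config; written locally here, then copied to
# /srv/wp_cluster/haproxy.cfg on the host. Contents are an assumption:
# the original file is not shown in this guide.
cat <<'EOF' > haproxy.cfg
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend wp_front
    bind *:8888
    default_backend wp_back

backend wp_back
    balance roundrobin
    server backend1 backend1:80 check
    server backend2 backend2:80 check
EOF
cat haproxy.cfg
```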
docker run -d --name=wp_proxy --hostname=wp_proxy \
--restart=always \
--cpus=1 --memory=2g \
--link=wp_app_1:backend1 \
--link=wp_app_2:backend2 \
-v /srv/wp_cluster/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg \
-p 80:8888 \
haproxy:latest
Database Replication Setup
mkdir -p /srv/wp_cluster/slave_db
cat <<EOF > /srv/wp_cluster/slave_db/custom.cnf
[mysqld]
server_id=2
relay_log=relay_bin
EOF
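The primary and replica configs differ only in server_id, which must be unique per node, and the logging option. A small helper makes the pattern explicit; the function name write_node_cnf and the local output directories are hypothetical:

```shell
# Hypothetical helper: emit a my.cnf fragment for a given node.
# server_id must be unique across the replication topology.
write_node_cnf() {
    local dir="$1" id="$2" log_opt="$3"
    mkdir -p "$dir"
    cat <<EOF > "$dir/custom.cnf"
[mysqld]
server_id=$id
$log_opt
EOF
}

write_node_cnf ./master_db 1 "log_bin=primary_bin"
write_node_cnf ./slave_db  2 "relay_log=relay_bin"
cat ./slave_db/custom.cnf
```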
docker run -d --name=wp_db_slave --hostname=wp_db_slave \
--restart=always \
--cpus=1 --memory=1g \
-v /srv/wp_cluster/slave_db/data:/var/lib/mysql \
-v /srv/wp_cluster/slave_db/custom.cnf:/etc/mysql/conf.d/custom.cnf \
--link=wp_db_master:master_host \
-e MYSQL_ROOT_PASSWORD=SecRet123 \
mysql:8.0
Create replication credentials on the primary node:
docker exec -ti wp_db_master bash
root@wp_db_master:/# mysql -uroot -pSecRet123
mysql> CREATE USER 'replication_user'@'%' IDENTIFIED BY 'ReplPass789';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';
mysql> FLUSH PRIVILEGES;
Dump and transfer data to the replica:
docker exec wp_db_master bash -c "mysqldump -uroot -pSecRet123 --lock-all-tables --master-data=2 --all-databases > /tmp/dump.sql"
docker cp wp_db_master:/tmp/dump.sql ./local_dump.sql
docker cp ./local_dump.sql wp_db_slave:/tmp/dump.sql
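The --master-data=2 flag embeds the binlog coordinates in the dump as a comment, which is where the MASTER_LOG_FILE and MASTER_LOG_POS values used on the replica come from. They can be recovered with grep; the sample line below is illustrative of the format mysqldump emits near the top of the dump:

```shell
# Illustrative line in the format mysqldump --master-data=2 emits
# (values mirror the ones used in this guide):
printf '%s\n' "-- CHANGE MASTER TO MASTER_LOG_FILE='primary_bin.000001', MASTER_LOG_POS=156;" > sample_dump.sql

# Extract the coordinates from a real dump the same way:
grep -m1 'CHANGE MASTER TO' sample_dump.sql
```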
Establish replication on the secondary node:
docker exec -ti wp_db_slave bash
root@wp_db_slave:/# mysql -uroot -pSecRet123
mysql> SOURCE /tmp/dump.sql;
mysql> CHANGE MASTER TO
    -> MASTER_HOST="master_host",
    -> MASTER_USER="replication_user",
    -> MASTER_PASSWORD="ReplPass789",
    -> MASTER_LOG_FILE="primary_bin.000001",
    -> MASTER_LOG_POS=156;
Query OK, 0 rows affected, 2 warnings (0.02 sec)
mysql> START SLAVE;
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: master_host
                  Master_User: replication_user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: primary_bin.000001
          Read_Master_Log_Pos: 156
               Relay_Log_File: wp_db_slave-relay-bin.000002
                Relay_Log_Pos: 324
        Relay_Master_Log_File: primary_bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 156
              Relay_Log_Space: 540
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)