Implementing Remote MySQL Database Backups with mysqldump
Core Process Overview
The fundamental procedure involves three sequential steps:
- Export the target database using the mysqldump utility.
- Compress the resulting SQL dump file.
- Securely transfer the compressed archive to a designated remote backup server.
Security and Connectivity Considerations
In typical production architectures, the backup server (local environment) cannot initiate direct connections to the production database server. This necessitates a two-part approach:
- A script on the production server handles database export and compression.
- A script on the backup server initiates the secure transfer to pull the archive.
Security is maintained by:
- Configuring cloud security groups to restrict SSH access to specific IP addresses.
- Utilizing SSH key-based authentication instead of passwords.
Prerequisite Software Installation
Execute the following commands on the production server (CentOS/RHEL 7 used as an example):
# Install the mysqldump client utility
sudo yum install -y mariadb
# Install the 7-Zip compression tool
sudo yum install -y p7zip
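Before wiring the utilities into scripts, a quick sanity check (a sketch; `mysqldump` is provided by the mariadb package and `7za` by p7zip) confirms both are on the PATH:

```shell
#!/bin/bash
# Verify that the tools installed above are actually available.
missing=0
for tool in mysqldump 7za; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool found at $(command -v "$tool")"
    else
        echo "WARNING: $tool is not installed" >&2
        missing=1
    fi
done
```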
Production Server Script: Export and Compress
Create a script (e.g., /opt/scripts/db_export.sh) on the production server. This script performs the local dump and compression.
#!/bin/bash
# Configuration Variables
DB_HOST="localhost"
DB_USER="backup_user"
DB_PASS="your_secure_password"
DB_NAME="application_db"
BACKUP_DIR="/var/backups/mysql"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
DUMP_FILE="${DB_NAME}_${TIMESTAMP}.sql"
ARCHIVE_FILE="${DB_NAME}_${TIMESTAMP}.7z"
# Ensure the backup directory exists, then enter it
mkdir -p "${BACKUP_DIR}"
cd "${BACKUP_DIR}" || exit 1
# Remove old dump files (optional, for cleanup)
rm -f ${DB_NAME}_*.sql 2>/dev/null
rm -f ${DB_NAME}_*.7z 2>/dev/null
# Export the database using mysqldump
mysqldump -h ${DB_HOST} -u ${DB_USER} -p"${DB_PASS}" ${DB_NAME} > ${DUMP_FILE}
# Verify the dump succeeded before compressing
if [ $? -eq 0 ] && [ -f "${DUMP_FILE}" ]; then
    echo "Database dump successful: ${DUMP_FILE}"
    # Compress the dump file with 7-Zip; only remove the uncompressed
    # SQL file if compression actually succeeded
    if 7za a -p"${DB_PASS}" "${ARCHIVE_FILE}" "${DUMP_FILE}"; then
        rm -f "${DUMP_FILE}"
        echo "Archive created: ${ARCHIVE_FILE}"
    else
        echo "ERROR: Compression failed; keeping ${DUMP_FILE}." >&2
        exit 1
    fi
else
    echo "ERROR: Database dump failed." >&2
    exit 1
fi
Make the script executable:
sudo chmod +x /opt/scripts/db_export.sh
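The script assumes a dedicated backup_user account. One way to provision such an account with only the privileges a plain mysqldump run typically needs (a sketch; the grant list and names are assumptions matching the variables in db_export.sh, so adjust to your schema) is to generate the SQL and review it before applying:

```shell
#!/bin/bash
# Write the provisioning SQL to a file for review; apply it on the
# production server with: mysql -u root -p < create_backup_user.sql
cat > create_backup_user.sql <<'SQL'
CREATE USER IF NOT EXISTS 'backup_user'@'localhost' IDENTIFIED BY 'your_secure_password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON application_db.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
SQL
echo "Wrote create_backup_user.sql"
```

Note that CREATE USER IF NOT EXISTS requires MySQL 5.7+ or MariaDB 10.1.3+; on older servers, use a plain CREATE USER statement.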
Backup Server Script: Initiate Transfer and Store
Create a script (e.g., /opt/scripts/fetch_backup.sh) on the remote backup server. This script triggers the production script and retrieves the resulting archive.
#!/bin/bash
# Configuration Variables
PROD_SERVER="192.168.1.100" # Replace with production server IP
PROD_USER="backup_agent"
PROD_SCRIPT_PATH="/opt/scripts/db_export.sh"
PROD_BACKUP_DIR="/var/backups/mysql"
LOCAL_STORAGE_DIR="/mnt/backup_storage/mysql"
# Ensure local storage directory exists
mkdir -p ${LOCAL_STORAGE_DIR}
echo "[$(date)] Initiating remote backup process..."
# 1. Execute the export/compress script on the production server via SSH
ssh ${PROD_USER}@${PROD_SERVER} "${PROD_SCRIPT_PATH}"
# Check if the SSH command was successful
if [ $? -ne 0 ]; then
    echo "ERROR: Failed to execute remote backup script." >&2
    exit 1
fi
echo "[$(date)] Downloading the latest backup archive..."
# 2. Find the newest .7z file on the production server and copy it locally.
# Capture the filename in a variable rather than piping into a while loop:
# a piped loop runs in a subshell, so its error branch could neither fire
# reliably nor make the script exit.
REMOTE_ARCHIVE=$(ssh ${PROD_USER}@${PROD_SERVER} "ls -t ${PROD_BACKUP_DIR}/*.7z 2>/dev/null | head -1")
if [ -n "${REMOTE_ARCHIVE}" ]; then
    scp ${PROD_USER}@${PROD_SERVER}:"${REMOTE_ARCHIVE}" "${LOCAL_STORAGE_DIR}/"
    echo "[$(date)] Backup transferred: $(basename "${REMOTE_ARCHIVE}")"
else
    echo "ERROR: No backup archive found on the remote server." >&2
    exit 1
fi
# Optional: List contents of local backup directory
echo "Current backups in ${LOCAL_STORAGE_DIR}:"
ls -lh ${LOCAL_STORAGE_DIR} | tail -5
echo "[$(date)] Remote backup procedure completed."
Make the script executable:
sudo chmod +x /opt/scripts/fetch_backup.sh
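The newest-archive selection in fetch_backup.sh relies on `ls -t | head -1`. That behaviour can be exercised locally on a throwaway directory (file names below are illustrative):

```shell
#!/bin/bash
# Demonstrate the "newest file wins" selection used in fetch_backup.sh.
DEMO_DIR=$(mktemp -d)
touch "${DEMO_DIR}/application_db_20240101_000000.7z"
sleep 1   # ensure distinct modification times
touch "${DEMO_DIR}/application_db_20240102_000000.7z"
NEWEST=$(ls -t "${DEMO_DIR}"/*.7z | head -1)
echo "Newest archive: ${NEWEST}"
rm -rf "${DEMO_DIR}"
```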
Configuring SSH Key-Based Authentication
To enable password-less SSH access from the backup server to the production server:
- Generate an SSH key pair on the backup server (if one does not already exist):
ssh-keygen -t ed25519 -f ~/.ssh/backup_key
# Press Enter to accept the default location and an empty passphrase for automation.
- Copy the public key to the production server:
ssh-copy-id -i ~/.ssh/backup_key.pub ${PROD_USER}@${PROD_SERVER}
# You will need to enter the user's password once.
- Test the connection:
ssh -i ~/.ssh/backup_key ${PROD_USER}@${PROD_SERVER} "hostname"
- Update the backup server script to use the specific identity file for increased security:
# Modify the SSH and SCP commands in fetch_backup.sh
ssh -i ~/.ssh/backup_key ${PROD_USER}@${PROD_SERVER} "..."
scp -i ~/.ssh/backup_key ...
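As an alternative to passing -i on every command, a host alias in ~/.ssh/config on the backup server keeps the key selection in one place (the alias name prod-db is an assumption; the values mirror the configuration variables in fetch_backup.sh):

```shell
#!/bin/bash
# Append a host alias so scripts can simply call "ssh prod-db".
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host prod-db
    HostName 192.168.1.100
    User backup_agent
    IdentityFile ~/.ssh/backup_key
    IdentitiesOnly yes
EOF
chmod 600 ~/.ssh/config
```

With this in place, `ssh prod-db "..."` and `scp prod-db:...` resolve the address, user, and key automatically.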
Automation and Operational Recommendations
- Scheduling: Use cron on the backup server to execute fetch_backup.sh at regular intervals (e.g., daily).
- Retention Policy: Implement a cleanup script on both servers to delete backups older than a defined period (e.g., 30 days).
- Compression: The 7-Zip format (7za) typically achieves a 60-70% reduction in size for SQL text dumps.
- Network Security: Harden the SSH configuration on the production server (/etc/ssh/sshd_config): disable root login, use a non-standard port, and employ tools like fail2ban.
- Monitoring: Ensure backup logs are monitored, and implement alerting for job failures.