Fading Coder

One Final Commit for the Last Sprint


MySQL Log Management: Binary, Error, and Slow Query Configuration

Tech · May 8

Error Log Diagnostics

The error log tracks server startup, shutdown, and critical runtime failures. Locate the active error log path:

SELECT @@GLOBAL.log_error;

To modify the destination, update the instance configuration:

[mysqld]
log_error=/var/log/mysql/instance_error.log

Apply changes by restarting the service:

systemctl restart mysql

Binary Log Architecture

Binary logs (binlog) serve dual purposes: replication streaming and point-in-time recovery (PITR). Enable binary logging by specifying a unique server identifier and log path:

server_id=101
log_bin=/var/lib/mysql/binlogs/mysql-bin
binlog_format=ROW

Prepare the filesystem with appropriate ownership:

mkdir -p /var/lib/mysql/binlogs
chown -R mysql:mysql /var/lib/mysql/binlogs
chmod 750 /var/lib/mysql/binlogs

Recording Semantics

Binary logging operates at the SQL layer, capturing modification events while excluding SELECT statements. The logging behavior varies by statement type:

  • DDL/DCL: Logged as statement text (statement-based)
  • DML: Captures row modifications (row-based when binlog_format=ROW)

Three format modes exist:

  • STATEMENT: Logs exact SQL text. Compact but non-deterministic for certain operations (UUID(), RAND(), SYSDATE()).
  • ROW: Records before/after row images. Verbose but guarantees consistency. Required for Group Replication and advanced HA topologies.
  • MIXED: Hybrid approach at MySQL's discretion.
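The replay hazard behind STATEMENT mode can be illustrated with a rough Python analogy (the statement evaluator here is a toy, not a MySQL internal): replaying the statement text re-evaluates the non-deterministic function, producing a value different from the one originally written.

```python
import uuid

def execute_statement(stmt):
    """Evaluate a toy 'INSERT ... VALUES (UUID())' statement.
    Each evaluation generates a fresh UUID, just as re-running
    the statement text on a replica would."""
    return str(uuid.uuid4()) if "UUID()" in stmt else stmt

stmt = "INSERT INTO t VALUES (UUID())"
original = execute_statement(stmt)   # value written on the source
replayed = execute_statement(stmt)   # value produced on statement replay

# Statement-based replay diverges; ROW format would ship the
# concrete UUID value instead of the statement text.
print(original != replayed)  # True
```

This is exactly why ROW format records the resulting row images rather than the SQL text.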

Event Structure

Binlog files contain discrete events—the smallest logical unit. A transaction comprising multiple statements generates distinct events:

BEGIN              pos 120  -> 185
UPDATE products    pos 185  -> 310
INSERT orders      pos 310  -> 425
COMMIT             pos 425  -> 512

Each event contains:

  1. Header with event type and timestamp
  2. Event-specific payload (query text or row images)
  3. Trailer with checksum

Positions mark byte offsets within the file, enabling granular extraction.
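The positions in the transaction above can be modeled directly (the tuples mirror the hypothetical listing, not real binlog output): each event ends exactly where the next one begins, which is what makes position-based extraction possible.

```python
# Toy event list mirroring the transaction above: (event, start_pos, end_pos).
# Positions are byte offsets into the binlog file.
events = [
    ("BEGIN",           120, 185),
    ("UPDATE products", 185, 310),
    ("INSERT orders",   310, 425),
    ("COMMIT",          425, 512),
]

def contiguous(events):
    """Check that each event's end offset is the next event's start offset."""
    return all(e[2] == n[1] for e, n in zip(events, events[1:]))

print(contiguous(events))             # True: no gaps between events
print(events[-1][2] - events[0][1])   # 392 bytes spanned by the transaction
```

Passing 120 and 512 as --start-position/--stop-position to mysqlbinlog would extract precisely this transaction.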

Log Inspection

Verify binary logging status:

SHOW VARIABLES LIKE 'log_bin%';
SHOW MASTER STATUS;

List available log files:

SHOW BINARY LOGS;

Rotate to a new log file manually:

FLUSH LOGS;

Examine event metadata:

SHOW BINLOG EVENTS IN 'mysql-bin.000042' LIMIT 50;

For detailed row content analysis, use the command-line decoder:

mysqlbinlog --base64-output=DECODE-ROWS -vv mysql-bin.000042

Filter by specific database:

mysqlbinlog --database=sales mysql-bin.000042

Recovery Procedure

Consider a scenario requiring data restoration:

CREATE DATABASE sales CHARSET utf8mb4;
USE sales;
CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(100));
INSERT INTO customers VALUES (101, 'Acme Corp'), (102, 'Global Tech');
COMMIT;

Following an accidental deletion:

DROP DATABASE sales;

Identify the event boundaries:

SHOW BINLOG EVENTS IN 'mysql-bin.000042';

Extract the relevant segment using position markers:

mysqlbinlog --start-position=894 --stop-position=1567 /var/lib/mysql/binlogs/mysql-bin.000042 > /tmp/recovery.sql

Apply the extracted log while suppressing new binary log generation:

SET SESSION sql_log_bin = 0;
SOURCE /tmp/recovery.sql;
SET SESSION sql_log_bin = 1;

Global Transaction Identifiers (GTID)

GTID assigns a unique identifier to every committed transaction, formatted as UUID:sequence_number. The UUID derives from auto.cnf (e.g., a1b2c3d4-e5f6-7890-abcd-ef1234567890), while the sequence increments monotonically per source.

Key characteristics:

  • DDL and DCL statements generate individual GTIDs
  • DML transactions (BEGIN..COMMIT) share a single GTID
  • Idempotency: Servers skip already-executed GTIDs during recovery or replication
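The idempotency rule can be sketched as a small model (a simplification of the real gtid_executed set, which tracks ranges rather than individual numbers): a server remembers which sequence numbers it has executed per source UUID and silently skips repeats.

```python
# Illustrative model of GTID idempotency. Assumed example UUID from the text.
executed = {"a1b2c3d4-e5f6-7890-abcd-ef1234567890": {1, 2, 3, 4, 5}}

def apply_transaction(gtid, executed):
    """Apply a transaction unless its GTID is already in the executed set."""
    uuid_part, seq = gtid.rsplit(":", 1)
    seen = executed.setdefault(uuid_part, set())
    if int(seq) in seen:
        return "skipped"        # already executed: idempotent no-op
    seen.add(int(seq))
    return "applied"

print(apply_transaction("a1b2c3d4-e5f6-7890-abcd-ef1234567890:3", executed))  # skipped
print(apply_transaction("a1b2c3d4-e5f6-7890-abcd-ef1234567890:6", executed))  # applied
```

This skip behavior is what later forces the --skip-gtids workaround during GTID-based recovery.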

Enable GTID mode:

[mysqld]
gtid_mode=ON
enforce_gtid_consistency=ON

After restarting, observe GTID sets:

SHOW MASTER STATUS;
-- Executed_Gtid_Set: a1b2c3d4-e5f6-7890-abcd-ef1234567890:1-5

GTID-Based Recovery

To restore specific transactions:

mysqlbinlog --include-gtids='a1b2c3d4-e5f6-7890-abcd-ef1234567890:3-7' mysql-bin.000043 > /tmp/gtid_restore.sql

However, applying this output directly has no effect: because of the idempotency checks, the server silently skips any transaction whose GTID (here, 3-7) already appears in gtid_executed. Bypass this by stripping GTID information:

mysqlbinlog --skip-gtids --include-gtids='a1b2c3d4-e5f6-7890-abcd-ef1234567890:3-7' mysql-bin.000043 > /tmp/gtid_restore.sql

Then apply:

SET sql_log_bin=0;
SOURCE /tmp/gtid_restore.sql;
SET sql_log_bin=1;

Additional filtering options:

  • --exclude-gtids: Omit specific transaction ranges
  • --skip-gtids: Convert GTID events to anonymous transactions for legacy compatibility
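The include/exclude selection can be modeled in a few lines (a simplified sketch of how these mysqlbinlog options filter events; real GTID sets may list multiple UUIDs and ranges):

```python
def parse_gtid_set(spec):
    """Parse 'uuid:3-7' or 'uuid:4' into (uuid, start, end)."""
    uuid_part, rng = spec.rsplit(":", 1)
    lo, _, hi = rng.partition("-")
    return uuid_part, int(lo), int(hi or lo)

def keep_event(gtid, include=None, exclude=None):
    """Mimic mysqlbinlog --include-gtids / --exclude-gtids selection."""
    uuid_part, seq = gtid.rsplit(":", 1)
    seq = int(seq)

    def matches(spec):
        u, lo, hi = parse_gtid_set(spec)
        return u == uuid_part and lo <= seq <= hi

    if exclude and matches(exclude):
        return False            # exclusion wins over inclusion
    if include:
        return matches(include)
    return True                 # no filter: keep everything

src = "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
print(keep_event(f"{src}:5", include=f"{src}:3-7"))   # True
print(keep_event(f"{src}:9", include=f"{src}:3-7"))   # False
```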

Log Maintenance

Automated Purging

Configure automatic expiration to prevent disk exhaustion:

-- For MySQL 8.0+
SET GLOBAL binlog_expire_logs_seconds = 604800;  -- 7 days

-- Legacy approach
SET GLOBAL expire_logs_days = 7;

Permanent configuration:

[mysqld]
binlog_expire_logs_seconds = 1209600  # 14 days (my.cnf comments use #, not --)

Retention best practice: Maintain binlogs covering at least two full backup cycles plus one day.
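The retention values above are just day counts expressed in seconds; the arithmetic is worth making explicit (the weekly backup cycle here is an assumed example):

```python
DAY = 86400  # seconds per day

def days_to_seconds(days):
    return days * DAY

# The two retention settings shown above:
print(days_to_seconds(7))    # 604800  -> one week
print(days_to_seconds(14))   # 1209600 -> two weeks

# Rule of thumb: at least two backup cycles plus one day.
backup_cycle_days = 7  # assumed weekly full backups
print(days_to_seconds(2 * backup_cycle_days + 1))  # 1296000 (15 days)
```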

Manual Cleanup

Remove logs older than a specific time:

PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 3 DAY);

Purge up to a specific file (exclusive):

PURGE BINARY LOGS TO 'mysql-bin.000038';

Avoid manual rm commands on binlog files; this corrupts the index. If accidental deletion occurs:

  1. Stop MySQL
  2. Temporarily disable log_bin in configuration
  3. Start MySQL to rebuild internal state
  4. Stop again, re-enable binlog, restart

Warning: Executing RESET MASTER on a replication source destroys the binary log index and breaks replication topology.

Log Rotation Mechanics

New binlog files generate under these conditions:

  • Manual execution of FLUSH LOGS
  • Server restart
  • File size exceeds max_binlog_size (default 1GB)
  • Backup utilities that invoke FLUSH LOGS (often while holding table locks)

Slow Query Analysis

The slow query log records statements that exceed an execution-time threshold and is the primary source of optimization targets.

Enable comprehensive logging:

[mysqld]
slow_query_log=1
slow_query_log_file=/var/lib/mysql/slow_queries.log
long_query_time=1.0
log_queries_not_using_indexes=1

Simulate problematic queries:

SELECT * FROM large_table WHERE unindexed_column LIKE '%search%';
SELECT * FROM orders WHERE order_date > DATE_SUB(CURDATE(), INTERVAL 2 YEAR);

Analysis Tools

Summarize slow log patterns with built-in tools:

mysqldumpslow -s t -t 10 /var/lib/mysql/slow_queries.log

Parameters:

  • -s t: Sort by total time
  • -s c: Sort by execution count
  • -t 10: Display top 10 entries
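What mysqldumpslow does under the hood can be approximated in a short sketch (the regexes and sample entries are illustrative, not the tool's actual implementation): normalize literals so similar queries group together, then sort by aggregate time.

```python
import re
from collections import defaultdict

def fingerprint(query):
    """Abstract string and numeric literals, roughly the way
    mysqldumpslow groups similar queries."""
    q = re.sub(r"'[^']*'", "'S'", query)
    q = re.sub(r"\b\d+\b", "N", q)
    return q

# (query, execution time in seconds) -- hypothetical slow-log entries
entries = [
    ("SELECT * FROM orders WHERE id = 42", 1.2),
    ("SELECT * FROM orders WHERE id = 99", 2.3),
    ("SELECT * FROM customers WHERE name = 'Acme'", 0.8),
]

totals = defaultdict(lambda: [0, 0.0])  # fingerprint -> [count, total_time]
for query, t in entries:
    stats = totals[fingerprint(query)]
    stats[0] += 1
    stats[1] += t

# Sort by total time, like `mysqldumpslow -s t`
for fp, (count, total) in sorted(totals.items(), key=lambda kv: -kv[1][1]):
    print(f"count={count} total={total:.1f}s  {fp}")
```

The two orders queries collapse into one fingerprint with a combined time, which is exactly the aggregation mysqldumpslow reports.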

For advanced analysis, utilize Percona Toolkit:

pt-query-digest /var/lib/mysql/slow_queries.log

This generates detailed statistics including query fingerprinting, execution histograms, and index usage recommendations.
