Deploying MySQL Read/Write Splitting with OneProxy
Environment Setup
This laboratory experiment uses four CentOS 6.6 x86_64 hosts with the network configuration described below. Firewall rules (iptables) and SELinux must be disabled on all hosts before proceeding.
The topology consists of one master MySQL server, two slave servers, and the OneProxy server. This experiment extends a previous mysql-proxy configuration, so certain baseline setup steps are assumed.
Prerequisites
Ensure master-slave replication is already configured between the MySQL instances. The following IP addresses are used in this guide:
- OneProxy Server: 192.168.19.79
- Master Node: 192.168.19.66
- Slave Node 1: 192.168.19.74
- Slave Node 2: 192.168.19.76
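These addresses are referenced repeatedly in the commands that follow. When scripting the later steps, a small helper can keep them in one place; the function name and role labels below are purely illustrative, not OneProxy conventions.

```shell
#!/bin/sh
# Illustrative lookup table for the lab topology (names are assumptions).
backend_addr() {
    case $1 in
        proxy)  echo 192.168.19.79 ;;
        master) echo 192.168.19.66 ;;
        slave1) echo 192.168.19.74 ;;
        slave2) echo 192.168.19.76 ;;
        *)      return 1 ;;  # unknown role
    esac
}

backend_addr master   # prints 192.168.19.66
```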
Installing OneProxy
Download and extract the OneProxy binary distribution to the proxy server host (192.168.19.79):
tar xf oneproxy-rhel5-linux64-v5.6-ga.tar.gz -C /usr/local/
cd /usr/local/oneproxy
Create the startup script with the following configuration:
#!/bin/bash
#
# OneProxy Startup Configuration
#
ONEPROXY_HOME=/usr/local/oneproxy
${ONEPROXY_HOME}/oneproxy --keepalive \
--proxy-address=192.168.19.79:3306 \
--admin-address=192.168.19.79:4041 \
--proxy-master-addresses=192.168.19.66:3306@dbpool \
--proxy-slave-addresses=192.168.19.74:3306@dbpool \
--proxy-slave-addresses=192.168.19.76:3306@dbpool \
--proxy-user-list=test/14B1772BA9874974E6ECEA0745EE774930E2EFA3 \
--proxy-part-tables=${ONEPROXY_HOME}/part.txt \
--proxy-found-rows \
--event-threads=4 \
--proxy-group-policy=dbpool:2 \
--proxy-group-security=dbpool:0 \
--proxy-memory-db=root/@::test \
--proxy-memory-engine \
--proxy-sql-autosave=${ONEPROXY_HOME}/sql_firewall.sql \
--log-file=${ONEPROXY_HOME}/oneproxy.log \
--pid-file=${ONEPROXY_HOME}/oneproxy.pid
Key configuration parameters explained:
- --keepalive: Automatically monitors the OneProxy process and restarts it if it dies
- --proxy-address: The listening address for client connections
- --admin-address: The management port for administrative operations
- --proxy-master-addresses: The write-enabled backend node (192.168.19.66)
- --proxy-slave-addresses: The read-enabled backend nodes (192.168.19.74 and 192.168.19.76)
- --proxy-user-list: Authentication credentials for proxy clients
- --proxy-group-policy: The routing policy; a value of 2 directs read queries to the slaves, falling back to the master if they are unavailable
- --event-threads: The number of worker threads (set to 4)
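For read/write splitting alone, only a subset of the flags above is strictly needed; the partitioning, memory-db, and SQL-firewall options belong to other OneProxy features. A stripped-down variant of the startup script, using the same values as above, might look like this (a sketch, not a tested minimal configuration):

```shell
#!/bin/bash
# Minimal read/write-splitting configuration (sketch; values copied from
# the full script above, non-essential feature flags removed).
ONEPROXY_HOME=/usr/local/oneproxy
${ONEPROXY_HOME}/oneproxy --keepalive \
  --proxy-address=192.168.19.79:3306 \
  --admin-address=192.168.19.79:4041 \
  --proxy-master-addresses=192.168.19.66:3306@dbpool \
  --proxy-slave-addresses=192.168.19.74:3306@dbpool \
  --proxy-slave-addresses=192.168.19.76:3306@dbpool \
  --proxy-user-list=test/14B1772BA9874974E6ECEA0745EE774930E2EFA3 \
  --proxy-group-policy=dbpool:2 \
  --event-threads=4 \
  --log-file=${ONEPROXY_HOME}/oneproxy.log \
  --pid-file=${ONEPROXY_HOME}/oneproxy.pid
```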
Configuring Authentication
Make the startup script (saved here as demo.sh) executable, start OneProxy, and then connect to the admin interface to generate the password hash:
chmod +x demo.sh
./demo.sh
mysql -uadmin -pOneProxy -h192.168.19.79 --port=4041
Generate the hash for the test user's password:
passwd 'redhat';
Record the generated hash and update the --proxy-user-list parameter in the startup script accordingly.
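Editing the script by hand works, but splicing the new hash in with sed avoids copy-paste mistakes. The helper below is a hypothetical convenience, assuming the startup script lives at /usr/local/oneproxy/demo.sh and the hash is 40 uppercase hex characters as in the example above.

```shell
#!/bin/sh
# Hypothetical helper: replace the hash portion of --proxy-user-list in the
# OneProxy startup script. Usage: update_user_list USER NEW_HASH SCRIPT_PATH
update_user_list() {
    user=$1; hash=$2; script=$3
    # Match "user/<old hex hash>" and substitute the new hash in place.
    sed -i "s#--proxy-user-list=${user}/[0-9A-F]*#--proxy-user-list=${user}/${hash}#" "$script"
}

# Example (paths and hash are placeholders):
# update_user_list test 14B1772BA9874974E6ECEA0745EE774930E2EFA3 /usr/local/oneproxy/demo.sh
```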
Creating Backend User
On the master MySQL server, create the application user that will connect through OneProxy:
/usr/local/mysql/bin/mysql
GRANT ALL ON *.* TO 'test'@'192.168.19.%' IDENTIFIED BY 'redhat';
FLUSH PRIVILEGES;
The username and password must match the credentials configured in the OneProxy startup script.
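A mismatch between the grant and the proxy configuration is a common source of authentication failures. As a quick consistency check (illustrative; it assumes the startup script format shown above), the username can be extracted from the script and compared against the account created on the master:

```shell
#!/bin/sh
# Hypothetical check: pull the username out of the --proxy-user-list line
# of the startup script, e.g. "test" from "--proxy-user-list=test/14B17...".
extract_proxy_user() {
    sed -n 's#.*--proxy-user-list=\([^/]*\)/.*#\1#p' "$1"
}

# Example: extract_proxy_user /usr/local/oneproxy/demo.sh should print the
# same user you granted on the master (here, "test").
```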
Starting OneProxy
Restart the proxy service with the updated configuration:
killall -9 oneproxy
./demo.sh
tail -f oneproxy.log
Verify backend status through the admin interface:
mysql -uadmin -pOneProxy -h192.168.19.79 --port=4041
LIST BACKEND;
The output displays all registered backend nodes with their operational status:
+------+--------------------+-----------+--------+--------+----------+--------+---------+------+------+-------+---------+-------+---------+-------+---------+------+------+---------+
| INDX | ADDRESS | TYPE | STATUS | MARKUP | REQUESTS | DEGREE | GROUP | IS_M | IS_S | MFile | MOffset | DFile | DOffset | RFile | ROffset | IO | SQL | Seconds |
+------+--------------------+-----------+--------+--------+----------+--------+---------+------+------+-------+---------+-------+---------+-------+---------+------+------+---------+
| 1 | 127.0.0.1:3306 | RW/Master | UP | 0 | 0 | 0 | | No | No | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
| 2 | 192.168.19.66:3306 | RW/Master | UP | 1 | 0 | 0 | dbpool | No | No | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
| 3 | 192.168.19.74:3306 | RO/Slave | UP | 1 | 0 | 0 | dbpool | No | No | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
| 4 | 192.168.19.76:3306 | RO/Slave | UP | 1 | 0 | 0 | dbpool | No | No | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
+------+--------------------+-----------+--------+--------+----------+--------+---------+------+------+-------+---------+-------+---------+-------+---------+------+------+---------+
All three configured backend nodes in the dbpool group show as UP, with the master node (192.168.19.66) registered as read-write (RW/Master) and both slave nodes (192.168.19.74 and 192.168.19.76) registered as read-only (RO/Slave). The 127.0.0.1:3306 entry has no group assignment and does not participate in dbpool routing.
Testing Read/Write Distribution
Network Packet Capture
To verify that write operations are directed to the master while read operations are distributed to slaves, use tcpdump on each backend server:
Master node capture:
tcpdump -i eth0 -nn -XX ip dst 192.168.19.66 and tcp dst port 3306
Slave node 1 capture:
tcpdump -i eth0 -nn -XX ip dst 192.168.19.74 and tcp dst port 3306
Slave node 2 capture:
tcpdump -i eth0 -nn -XX ip dst 192.168.19.76 and tcp dst port 3306
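The three capture commands differ only in the destination IP, so when repeating the test it can help to generate them from one template. The helper below is purely illustrative; it assumes the interface is eth0 as in the commands above.

```shell
#!/bin/sh
# Illustrative helper: build the tcpdump command for a given backend IP,
# matching the three capture commands shown above.
capture_cmd() {
    printf 'tcpdump -i eth0 -nn -XX ip dst %s and tcp dst port 3306' "$1"
}

capture_cmd 192.168.19.74   # prints the slave node 1 capture command
```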
Functional Testing
Connect to the database through OneProxy:
mysql -utest -predhat -h192.168.19.79
Execute various database operations and observe the packet captures on each backend server. Write operations (INSERT, UPDATE, DELETE) will appear exclusively on the master node, while read operations (SELECT) will be load-balanced across both slave nodes.
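A simple way to exercise both paths is to generate a mixed read/write workload and pipe it through the proxy connection while the captures run. The database and table names below (rwtest, t1) are arbitrary choices for this illustration, not part of the original setup.

```shell
#!/bin/sh
# Illustrative workload generator: emits alternating writes and reads that
# can be piped into the proxy, e.g.
#   gen_workload | mysql -utest -predhat -h192.168.19.79
gen_workload() {
    echo 'CREATE DATABASE IF NOT EXISTS rwtest;'
    echo 'CREATE TABLE IF NOT EXISTS rwtest.t1 (id INT PRIMARY KEY, v VARCHAR(20));'
    i=1
    while [ $i -le 3 ]; do
        echo "INSERT INTO rwtest.t1 VALUES ($i, 'row$i');"   # write -> master
        echo "SELECT * FROM rwtest.t1 WHERE id = $i;"        # read  -> slaves
        i=$((i + 1))
    done
}

gen_workload
```

With the workload running, the INSERT packets should appear only in the master's capture, while the SELECT packets alternate between the two slave captures.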
This confirms that the read/write splitting configuration is functioning correctly, with OneProxy intelligently routing queries based on the configured policy.