
Deploying LVS with Keepalived for High-Throughput Network Infrastructure


Distributing incoming network traffic across multiple backend servers mitigates single-point bottlenecks, increases aggregate bandwidth, and keeps a service continuously available. When individual nodes exceed their processing limits, a cluster lets many nodes handle requests in parallel, and the service survives hardware failures as long as enough redundant nodes remain online. Layer-7 proxies offer granular URL-based routing and rapid deployment, but they typically handle on the order of 10,000 concurrent connections efficiently. For environments serving millions of daily page views or sustaining high-throughput workloads, Linux Virtual Server (LVS) operates at the transport layer, using kernel-level packet manipulation for higher performance and lower CPU overhead.

IPVS Kernel Module & Administration

The IP Virtual Server (IPVS) component has been part of the mainline Linux kernel tree since version 2.4.24. Because it runs inside the kernel, no separate server binary needs to be compiled; administrators only need user-space utilities to manipulate its routing tables. The primary management interfaces are the traditional ipvsadm command-line tool and configuration-driven daemons such as Keepalived. The former injects rules immediately, while declarative config parsers synchronize state automatically.

Pre-deployment Environment Matrix

Hostname Internal IP Role OS Version
lvs-dir-01 192.168.50.10 Primary Director CentOS Stream 8
lvs-dir-02 192.168.50.11 Backup Director CentOS Stream 8
app-node-01 192.168.50.21 Real Server Ubuntu 22.04 LTS
app-node-02 192.168.50.22 Real Server Ubuntu 22.04 LTS

As prerequisites, disable firewalld and SELinux on all nodes so they cannot intercept or drop forwarded packets. Verify that the required kernel modules load before initialization:

modprobe ip_vs
modprobe ip_vs_wrr
lsmod | grep ip_vs
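modprobe only loads the modules for the current boot. On a systemd-based distribution such as CentOS Stream, the modules-load.d mechanism makes this persistent (a sketch; the file name ipvs.conf is arbitrary):

```shell
# Load the IPVS modules at every boot via systemd-modules-load(8).
# Only the .conf suffix of the file name matters.
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_wrr
EOF
```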

Manual Director Configuration (Direct Routing)

Configure the virtual interface and establish routing rules on the primary director node:

# Assign floating address
ip addr add 192.168.50.100/24 dev eth0

# Flush existing rules
ipvsadm --flush

# Set connection timeouts in seconds (tcp, tcpfin, udp)
ipvsadm --set 30 5 60

# Register virtual service with Weighted Round Robin scheduling
ipvsadm -A -t 192.168.50.100:443 -s wrr -p 120

# Attach backend nodes in Gateway (DR) mode
ipvsadm -a -t 192.168.50.100:443 -r 192.168.50.21:443 -g -w 2
ipvsadm -a -t 192.168.50.100:443 -r 192.168.50.22:443 -g -w 2

# Persist rules (the ipvsadm service on CentOS restores from this file) and verify state
ipvsadm --save > /etc/sysconfig/ipvsadm
ipvsadm -ln

Expected output confirms active routing entries under the specified protocol and scheduler flags.
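For reference, ipvsadm -ln should then report the virtual service and both real servers in Route (DR) mode. An illustrative listing (connection counters and the version banner will vary):

```text
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.50.100:443 wrr persistent 120
  -> 192.168.50.21:443            Route   2      0          0
  -> 192.168.50.22:443            Route   2      0          0
```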

Backend Node Adjustments & Address Resolution Control

To keep the real servers from answering ARP queries for the VIP, which would cause MAC address conflicts within the shared subnet, each real server must bind the floating IP exclusively to its loopback adapter and suppress ARP replies and announcements for it:

# Bind VIP locally without triggering global network advertisements
ip addr add 192.168.50.100/32 dev lo

# Persist the kernel parameters, then load them immediately
cat >> /etc/sysctl.conf <<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
EOF
sysctl -p

Binding to lo isolates the address from physical interface propagation. Setting arp_ignore=1 makes the kernel reply only when the target address is configured on the interface that received the query, so the VIP on lo is never answered for. Setting arp_announce=2 forces the system to use the most appropriate local address, never the VIP, as the source of outgoing ARP requests, so the floating IP is never advertised on the wire.
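A quick way to confirm that all four parameters took effect on each real server (read-only, so it can run unprivileged; the expected values mirror the sysctl block above):

```shell
#!/bin/sh
# Print the current ARP-suppression parameters; each should match the
# values written to /etc/sysctl.conf (arp_ignore=1, arp_announce=2).
for key in net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce \
           net.ipv4.conf.lo.arp_ignore  net.ipv4.conf.lo.arp_announce; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key")"
done
```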

Architectural Forwarding Models

LVS supports four distinct forwarding methodologies:

  1. Direct Routing (DR): Modifies Ethernet frame headers. Requests enter the director, which rewrites destination MACs to backend addresses. Backends respond directly to clients. Requires shared Layer-2 segment. Highest throughput efficiency.
  2. Network Address Translation (NAT): Alters IP headers bidirectionally. All inbound and outbound traffic traverses the director node. Suitable for isolated subnets but introduces potential chokepoints under heavy load.
  3. Tunneling (TUN): Encapsulates incoming packets within new IP headers destined for backends. Responses bypass the director entirely. Requires tunnel drivers on all nodes and is typically restricted to IPv4-only deployments.
  4. FullNAT: Performs both Source and Destination NAT. Decouples directors and backends onto separate VLANs, allowing independent addressing schemes while maintaining session tracking internally.

Session Distribution Algorithms

Algorithm Behavior
RR Sequential distribution ignoring current load metrics.
WRR Distributes proportionally based on assigned weight values.
LC Routes to the endpoint with the fewest established sessions.
WLC Dynamic least-connections weighted by server capability ratings.
DH Hash-based destination lookup ensuring consistent IP-to-backend mapping.
SH Hash-based source lookup enforcing client-affinity routing.
Additional algorithms such as LBLC (Locality-Based Least Connection) optimize cache-heavy environments by mapping requests for the same destination IP to the same backend whenever possible, preserving cache locality.
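The weighted schemes are easy to picture with a toy simulation. This sketch (not the kernel's exact interleaving logic) expands each backend into a pool once per unit of weight and cycles through it; the hostnames match the matrix above, but a 2:1 weighting is used here purely to make the skew visible:

```shell
#!/bin/sh
# Toy weighted round-robin: app-node-01 has weight 2, app-node-02 weight 1,
# so the pool holds app-node-01 twice and app-node-02 once.
pool="app-node-01 app-node-01 app-node-02"

i=0
for req in 1 2 3 4 5 6; do
    set -- $pool              # load the pool into positional parameters
    shift $((i % 3))          # rotate through the 3 pool slots (2 + 1)
    echo "request $req -> $1"
    i=$((i + 1))
done
```

Over six requests the heavier node receives four and the lighter node two, the 2:1 ratio that wrr guarantees over a full scheduling cycle.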

Automated Failover with Keepalived

A stand-alone director is itself a single point of failure. Integrating Keepalived adds VRRP-based failover and automated backend health monitoring. Install the daemon using the native package manager, then configure /etc/keepalived/keepalived.conf on the primary director:

vrrp_instance VI_CLUSTER {
    state MASTER
    interface eth0
    virtual_router_id 88
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass securekey
    }
    virtual_ipaddress {
        192.168.50.100/24
    }
}

virtual_server 192.168.50.100 443 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 120
    protocol TCP

    real_server 192.168.50.21 443 {
        weight 2
        TCP_CHECK {
            connect_timeout 5
            retry 3
        }
    }

    real_server 192.168.50.22 443 {
        weight 2
        TCP_CHECK {
            connect_timeout 5
            retry 3
        }
    }
}

The backup director's configuration mirrors the block above, changing state to BACKUP, lowering priority to 100, and keeping the virtual_router_id and authentication password identical. Restart the service after modification:

systemctl enable --now keepalived
journalctl -u keepalived -f

Verification involves checking IP assignments via ip a s eth0 and confirming active connections using ipvsadm -ln -c. If the primary node crashes, the backup automatically acquires the floating address and resumes traffic handling without manual intervention.

Diagnostic Recommendations

  1. Monitor kernel logs for dropped packets or ARP anomalies.
  2. Validate routing tables on directors and real servers before going live.
  3. Keep time synchronized across cluster members to avoid session-expiry mismatches.
  4. Check for firewall rules blocking the multicast traffic (224.0.0.18) that VRRP advertisements require.
  5. Regularly audit weight assignments against actual node performance to maintain optimal utilization.
