High Availability Load Balancer

Suppose you really need a highly available service. Normally you would put a load balancer in front of a server farm. But what happens when the load balancer fails? Well, we could have two load balancers: one that is actually active, and a second one that kicks in when the first one fails.

We could have two servers or two VMs in the same VLAN. Each has its own IP, they are able to communicate/broadcast with each other, and there’s a third IP that is shared between the two servers/VMs. One of them is the MASTER, the other one the BACKUP. This is exactly what the VRRP protocol, which keepalived implements, provides.

For a test setup I used CentOS 7; you could use your own favorite distro. I set up my test machines with 192.168.6.110/24 and 192.168.6.120/24 as the machines’ own IPs, and I’m going to set up two clustered/highly available IPs, 192.168.7.100/24 and 192.168.7.101/24. Notice they’re from different subnets; the machines’ own IPs could be anything as long as the two machines can communicate with each other. The default gateway on the two machines should be the one for the clustered subnet (192.168.7.1 here).

I first installed some basic tools on both machines:

yum install net-tools wget vim mtr

Then, on both, I installed the daemon that handles the IP sharing/failover:

yum install keepalived

You could use any software you prefer as a load balancer: nginx, HAProxy, Pound, etc. In order for your preferred load balancer software to be able to bind/listen on the shared IP even while the machine is in the BACKUP state (and the IP is not locally assigned), you need some kernel configuration. For this we add to /etc/sysctl.conf the following line:

net.ipv4.ip_nonlocal_bind=1

then running “sysctl -p” will activate the setting.
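As an example of why this matters, here is a minimal HAProxy frontend bound to the shared IP. This is only a sketch: the backend servers 192.168.6.10 and 192.168.6.20 are hypothetical, and it assumes HAProxy is your chosen load balancer.

# /etc/haproxy/haproxy.cfg (fragment)
frontend www
    # bind to the shared/clustered IP; this only works on the node
    # that does NOT currently hold the IP because of ip_nonlocal_bind=1
    bind 192.168.7.100:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 192.168.6.10:80 check
    server web2 192.168.6.20:80 check

With this in place the same config can be deployed unchanged on both nodes, and HAProxy starts cleanly on the BACKUP too.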

For the demonstration’s sake I just stopped and disabled firewalld; for production machines, though, you should consider implementing a firewall (honestly, I prefer the classic iptables-services on CentOS 7).
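If you do keep a firewall, the two nodes must be able to exchange VRRP advertisements: these are IP protocol 112 packets sent to the multicast address 224.0.0.18. With iptables-services, a rule along these lines (a sketch, exact placement depends on your ruleset) would go into /etc/sysconfig/iptables:

# /etc/sysconfig/iptables (fragment)
# allow VRRP advertisements (IP protocol 112) between the keepalived nodes
-A INPUT -p 112 -d 224.0.0.18/32 -j ACCEPT

Without this, each node stops seeing the other’s advertisements and both will try to claim the shared IPs (a split-brain).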

Then we need to configure the keepalived daemon. On the first machine, the “/etc/keepalived/keepalived.conf” file looks like this:

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 52
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.7.100/24
        192.168.7.101/24
    }

    virtual_routes {
        default via 192.168.7.1 dev eth0
        }
}

on the second:

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.7.100/24
        192.168.7.101/24
    }

    virtual_routes {
        default via 192.168.7.1 dev eth0
        }
}
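keepalived only moves the shared IPs when a node goes away; out of the box it does not notice when the load balancer process itself dies while the machine stays up. A vrrp_script can add that check. Here is a sketch assuming HAProxy as the load balancer; the same addition would go into both machines’ configs, next to the existing vrrp_instance:

vrrp_script chk_haproxy {
    script "pidof haproxy"   # exits non-zero when haproxy is not running
    interval 2               # run the check every 2 seconds
    weight -10               # subtract 10 from the priority while it fails
}

vrrp_instance VI_1 {
    # ... the rest of the instance config, as shown above ...
    track_script {
        chk_haproxy
    }
}

With the priorities used here (100 vs 99), a failing check drops the MASTER to 90, so the BACKUP wins the election and takes over the IPs even though the first machine is still up.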

Then, on both machines, we need to enable and start the daemon:

systemctl enable keepalived.service
systemctl start keepalived.service

Now we can ping one of the clustered IPs; it’s alive.

To check further, “ip addr show” would look like this on the first machine:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:62:b1:58 brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.110/24 brd 192.168.6.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.7.100/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet 192.168.7.101/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe62:b158/64 scope link
       valid_lft forever preferred_lft forever

and like this, on the second one:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:02:88:bd brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.120/24 brd 192.168.6.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe02:88bd/64 scope link
       valid_lft forever preferred_lft forever

Then ip route show would look like this, on the active node:

default via 192.168.7.1 dev eth0  proto static  metric 100
192.168.6.0/24 dev eth0  proto kernel  scope link  src 192.168.6.110  metric 100
192.168.7.0/24 dev eth0  proto kernel  scope link  src 192.168.7.100

and like this on the second one:

192.168.6.0/24 dev eth0  proto kernel  scope link  src 192.168.6.120  metric 100

A tail of /var/log/messages on the first machine would look like this:

Aug 12 21:17:41 KA1 systemd: Started LVS and VRRP High Availability Monitor.
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Netlink reflector reports IP 192.168.6.110 added
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Netlink reflector reports IP fe80::5054:ff:fe62:b158 added
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Netlink reflector reports IP 192.168.6.110 added
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Registering Kernel netlink reflector
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Netlink reflector reports IP fe80::5054:ff:fe62:b158 added
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Registering Kernel netlink reflector
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Registering Kernel netlink command channel
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Registering gratuitous ARP shared channel
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Configuration is using : 62972 Bytes
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: Using LinkWatch kernel netlink reflector...
Aug 12 21:17:41 KA1 Keepalived_vrrp[2010]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Registering Kernel netlink command channel
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Opening file '/etc/keepalived/keepalived.conf'.
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Configuration is using : 6203 Bytes
Aug 12 21:17:41 KA1 Keepalived_healthcheckers[2009]: Using LinkWatch kernel netlink reflector...
Aug 12 21:17:42 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 12 21:17:42 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Received lower prio advert, forcing new election
Aug 12 21:17:43 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Entering MASTER STATE
Aug 12 21:17:43 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 12 21:17:43 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) setting protocol Virtual Routes
Aug 12 21:17:43 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.7.100
Aug 12 21:17:43 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.7.101
Aug 12 21:17:43 KA1 Keepalived_healthcheckers[2009]: Netlink reflector reports IP 192.168.7.100 added
Aug 12 21:17:43 KA1 Keepalived_healthcheckers[2009]: Netlink reflector reports IP 192.168.7.101 added
Aug 12 21:17:43 KA1 NetworkManager[436]: <info>  Policy set 'eth0' (eth0) as default for IPv4 routing and DNS.
Aug 12 21:17:48 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.7.100
Aug 12 21:17:48 KA1 Keepalived_vrrp[2010]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.7.101

There you have it, not that hard 🙂

If you switch off the first machine, you’ll notice that the clustered IPs are brought up automatically by the second node. When the first machine comes back online, they move back to it, since the MASTER has the higher priority.
