Configuring load balancing on InfoWatch Traffic Monitor

What do you do when the capacity of a single server is not enough to process all requests, and the software vendor does not provide load balancing? There are many options, from buying a hardware load balancer to limiting the number of requests. Which one is right depends on the situation and the existing constraints. In this article, we describe what can be done when the budget is limited and a spare server is available.

The system whose server load we needed to reduce was the InfoWatch DLP (data leak prevention) system. A distinctive feature of the implementation is that the balancer function is placed on one of the production servers.

One of the problems we encountered was the inability to use Source NAT (SNAT). Below we explain why it was needed and how the problem was solved.

Initially, the logical diagram of the existing system looked like this:

[Diagram: initial system layout with a single TM server]

ICAP traffic, SMTP traffic, and events from user workstations were all processed on the Traffic Monitor (TM) server. The database server easily handled the load of events after they had been processed on the TM, but the load on the TM itself was high. This showed up as a growing message queue on the Device Monitor (DM) server, as well as in CPU and memory usage on the TM.

At first glance, adding one more TM server to this scheme and moving either ICAP or DM traffic to it would solve the problem, but we decided against this approach because it would reduce fault tolerance.

Solution Description

While looking for a suitable solution, we settled on the free software keepalived together with LVS: keepalived handles building a failover cluster and can also manage the LVS balancer.

What we wanted to achieve (reduce the load on the TM while keeping the current level of fault tolerance) was supposed to work like this:

[Diagram: target layout with keepalived/LVS balancing traffic across two TM servers]

While testing the setup, it turned out that the customized Red Hat build installed on the servers does not support SNAT. We had planned to use SNAT so that incoming packets and the responses to them would be sent from the same IP address; otherwise we would get the following picture:

[Diagram: without SNAT, responses to sessions forwarded to the backup leave from IP2 instead of the VIP]

This is unacceptable: a proxy server sending packets to the Virtual IP (VIP) address expects a response from the VIP, but for sessions forwarded to the backup the response would come from IP2 instead. The solution was to create an additional routing table on the backup server and connect the two TM servers with a separate network, as shown below:

[Diagram: additional routing table on the backup and a dedicated network between the two TM servers]

Setup

We implement a scheme with two servers running the ICAP, SMTP, and TCP 9100 services, with the load balancer installed on one of them.

We have two RHEL6 servers, from which the standard repositories and some of the packages have been removed.

Services we need to balance:

• ICAP - TCP 1344;

• SMTP - TCP 25.

We also need to balance the traffic transfer service from the DM - TCP 9100.

First we need to plan the network.

Virtual IP address (VIP):

• IP: 10.20.20.105.

Server TM6_1:

• External IP: 10.20.20.101;

• Internal IP: 192.168.1.101.

Server TM6_2:

• External IP: 10.20.20.102;

• Internal IP: 192.168.1.102.

Then we enable IP forwarding on both TM servers.
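
A minimal sketch of how this can be done on RHEL 6, assuming the standard sysctl configuration paths (repeat on both tm6_1 and tm6_2):

# enable forwarding for the current session
[root@tm6_1 ~]# sysctl -w net.ipv4.ip_forward=1
# make the change persistent in /etc/sysctl.conf and re-apply it
[root@tm6_1 ~]# sed -i 's/^net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
[root@tm6_1 ~]# sysctl -p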

We decide which of the servers will be the main one and which one the backup. Let the master be TM6_1 and the backup TM6_2.

On the backup server, create a new routing table named balancer and the routing rules for it:

[root@tm6_2 ~]# echo 101 balancer >> /etc/iproute2/rt_tables
[root@tm6_2 ~]# ip rule add from 192.168.1.102 table balancer
[root@tm6_2 ~]# ip route add default via 192.168.1.101 table balancer

The commands above only work until the system is rebooted. To preserve the routes across reboots, you can add them to /etc/rc.d/rc.local, but the better way is through the configuration file /etc/sysconfig/network-scripts/route-eth1 (note that it uses a different syntax).
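
A rough sketch of what the persistent configuration could look like on the backup server, assuming the inter-TM network is attached to eth1 (the route goes into route-eth1, and the matching rule into the companion file rule-eth1):

# /etc/sysconfig/network-scripts/route-eth1
default via 192.168.1.101 table balancer

# /etc/sysconfig/network-scripts/rule-eth1
from 192.168.1.102 table balancer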

Install keepalived on both TM servers. We used rpmfind.net as the distribution source:

[root@tm6_1 ~]# yum install https://rpmfind.net/linux/centos/6.10/os/x86_64/Packages/keepalived-1.2.13-5.el6_6.x86_64.rpm

In the keepalived settings, we designate one of the servers as master and the other as backup, then define the VIP and the services to be load balanced. The configuration file is usually located at /etc/keepalived/keepalived.conf.

Settings for the TM6_1 server

vrrp_sync_group VG1 { 
   group { 
      VI_1 
   } 
} 
vrrp_instance VI_1 { 
        state MASTER 
        interface eth0 

        lvs_sync_daemon_interface eth0
        virtual_router_id 51 
        priority 151 
        advert_int 1 
        authentication { 
                auth_type PASS 
                auth_pass example 
        } 

        virtual_ipaddress { 
                10.20.20.105 
        } 
}

virtual_server 10.20.20.105 1344 {
    delay_loop 6
    lb_algo wrr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.101 1344 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 1344
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.102 1344 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 1344
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.20.20.105 25 {
    delay_loop 6
    lb_algo wrr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.101 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 25
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.102 25 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 25
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

virtual_server 10.20.20.105 9100 {
    delay_loop 6
    lb_algo wrr
    lb_kind NAT
    protocol TCP

    real_server 192.168.1.101 9100 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 9100
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.1.102 9100 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 9100
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Settings for the TM6_2 server

vrrp_sync_group VG1 { 
   group { 
      VI_1 
   } 
} 
vrrp_instance VI_1 { 
        state BACKUP 
        interface eth0 

        lvs_sync_daemon_interface eth0
        virtual_router_id 51 
        priority 100 
        advert_int 1 
        authentication { 
                auth_type PASS 
                auth_pass example 
        } 

        virtual_ipaddress { 
                10.20.20.105 
        } 
}

On the master server, install ipvsadm, the management utility for the LVS balancer that will distribute the traffic. There is no point in installing the balancer on the second server, since there are only two servers in the configuration.

[root@tm6_1 ~]# yum install https://rpmfind.net/linux/centos/6.10/os/x86_64/Packages/ipvsadm-1.26-4.el6.x86_64.rpm

The balancer will be controlled by keepalived, which we have already configured.

To complete the picture, let's add keepalived to autostart on both servers:

[root@tm6_1 ~]# chkconfig keepalived on

Checking the results

Run keepalived on both servers:

service keepalived start
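
To make sure the daemon started and keepalived has programmed the virtual servers into IPVS, a quick check on the master could look like this (a sketch; the exact output depends on the versions used):

[root@tm6_1 ~]# service keepalived status
[root@tm6_1 ~]# ipvsadm -Ln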

Checking the Availability of a VRRP Virtual Address

Make sure the VIP is present on the master:

[Screenshot: the VIP 10.20.20.105 is present on the master]

And there is no VIP on backup:

[Screenshot: the VIP is absent on the backup]
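
The same check can be done from the command line, assuming eth0 is the external interface: on the master the command below should show the address, while on the backup it should return nothing.

[root@tm6_1 ~]# ip addr show eth0 | grep 10.20.20.105
[root@tm6_2 ~]# ip addr show eth0 | grep 10.20.20.105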

Using the ping command, check the availability of the VIP:

[Screenshot: ping to 10.20.20.105]
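
In our setup this is simply the following command, run from any host that can reach the VIP:

ping 10.20.20.105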

Now turn off the master and run the ping command again.

The result should remain the same, and on the backup we will see the VIP:

[Screenshot: the VIP has moved to the backup]

Service balancing check

Let's take SMTP as an example and open two connections to 10.20.20.105 at the same time:

telnet 10.20.20.105 25

On the master, we should see that both connections are active and directed to different real servers:

[root@tm6_1 ~]# watch ipvsadm -Ln

[Screenshot: ipvsadm output showing two active connections distributed across the real servers]

Conclusion

Thus, we have implemented a fault-tolerant configuration of the TM services with the balancer installed on one of the TM servers. In our case, this roughly halved the load on the TM, which solved the problem of the product's lack of built-in horizontal scaling.

In most cases, this solution can be implemented quickly and at no extra cost, but there can be limitations and configuration difficulties in some scenarios, for example when balancing UDP traffic.

Source: habr.com
