VMware NSX for the little ones. Part 5: Set up a load balancer


Part one. Introductory
Part two. Configuring Firewall and NAT Rules
Part three. Configuring DHCP
Part four. Routing setup

Last time we looked at NSX Edge's static and dynamic routing capabilities; today we'll deal with the load balancer.
Before starting the setup, I'd like to briefly recall the main types of balancing.

Theory

Today's load balancing solutions are most often divided into two categories: balancing at the fourth (transport) and seventh (application) layers of the OSI model. The OSI model is not the best reference point for classifying balancing methods: if an L4 balancer also supports TLS termination, for example, does that make it an L7 balancer? Still, this is the classification in common use.

  • An L4 balancer is most often a middle proxy sitting between the client and a set of available backends: it terminates TCP connections (that is, it responds to SYN itself), selects a backend, and initiates a new TCP session toward it by sending its own SYN. This is one of the basic variants; others are possible.
  • An L7 balancer distributes traffic across the available backends in a more "sophisticated" way than an L4 balancer: it can choose a backend based on, for example, the content of the HTTP message (URL, cookie, etc.).

Regardless of the type, the balancer can support the following functions:

  • Service discovery is the process of determining the set of available backends (Static, DNS, Consul, Etcd, etc.).
  • Checking the health of detected backends (actively "pinging" a backend with an HTTP request, passively detecting problems in TCP connections or several consecutive HTTP 503 responses, etc.).
  • Balancing itself (round robin, random selection, source IP hash, URI).
  • TLS termination and certificate verification.
  • Security related options (authentication, DoS prevention, rate limiting) and more.
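The passive health-detection idea mentioned above (taking a backend out of rotation after several consecutive 503 responses) can be sketched in a few lines of Python. The class name and threshold below are made up for illustration and are not an NSX API.

```python
FAIL_THRESHOLD = 3  # consecutive 503s before a backend leaves the rotation

class Backend:
    """Toy model of a backend tracked by passive health checks."""

    def __init__(self, name):
        self.name = name
        self.consecutive_failures = 0
        self.healthy = True

    def record_response(self, status_code):
        if status_code == 503:
            self.consecutive_failures += 1
            if self.consecutive_failures >= FAIL_THRESHOLD:
                self.healthy = False  # stop sending new requests here
        else:
            # any successful answer resets the failure counter
            self.consecutive_failures = 0
            self.healthy = True

b = Backend("web-01")
for code in (200, 503, 503, 503):
    b.record_response(code)
print(b.healthy)  # False: three 503s in a row took the backend out
```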

NSX Edge offers support for two balancer deployment modes:

Proxy mode, or one-arm. In this mode, NSX Edge uses its IP address as the source address when sending a request to one of the backends. Thus, the balancer simultaneously performs the functions of Source and Destination NAT. The backend sees all traffic as coming from the load balancer and responds directly to it. In such a scheme, the balancer must be in the same network segment as the internal servers.

Here’s how it happens:
1. The user sends a request to the VIP address (balancer address) that is configured on the Edge.
2. Edge selects one of the backends and performs destination NAT, replacing the VIP address with the address of the selected backend.
3. Edge performs source NAT, replacing the requesting user's address with its own.
4. The packet is sent to the selected backend.
5. The backend does not respond directly to the user, but to Edge, since the user's original address has been changed to the address of the balancer.
6. Edge sends the server's response to the user.
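The six steps of the one-arm flow can be illustrated with a toy packet rewrite. All addresses and field names here are invented for the example; this is only a sketch of the NAT logic, not how Edge implements it.

```python
VIP = "203.0.113.10"       # balancer address the client connects to
EDGE_IP = "192.168.1.1"    # Edge's address on the server segment
BACKENDS = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]

def one_arm_forward(packet, backend):
    # Step 2: destination NAT, the VIP is replaced with the chosen backend.
    # Step 3: source NAT, the client's address is replaced with the Edge's,
    # which is why the backend replies to the Edge and not to the client.
    return {"src": EDGE_IP, "dst": backend, "payload": packet["payload"]}

client_packet = {"src": "198.51.100.7", "dst": VIP, "payload": "GET /"}
fwd = one_arm_forward(client_packet, BACKENDS[0])
print(fwd["src"], fwd["dst"])  # 192.168.1.1 192.168.1.11
```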

Transparent, or inline, mode. In this scenario, the balancer has interfaces in both the internal and external networks, and there is no direct access to the internal network from the outside. The inline load balancer acts as a NAT gateway for the virtual machines on the internal network.

The mechanism is as follows:
1. The user sends a request to the VIP address (balancer address) that is configured on the Edge.
2. Edge selects one of the backends and performs destination NAT, replacing the VIP address with the address of the selected backend.
3. The packet is sent to the selected backend.
4. The backend receives a request with the user's original address (source NAT was not performed) and responds directly to it.
5. The traffic is again accepted by the load balancer, since in the inline scheme it usually acts as the default gateway for the server farm.
6. Edge performs source NAT to send traffic to the user using its VIP as the source IP address.
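The inline flow can be sketched the same way as a toy packet rewrite: on the way in only the destination is rewritten, and only the reply gets source NAT. Addresses and helper names are again invented for the example.

```python
VIP = "203.0.113.10"  # balancer address the client connects to

def inline_forward(packet, backend):
    # Step 2: destination NAT only, the client's source address is preserved,
    # so the backend sees the real client IP.
    return {"src": packet["src"], "dst": backend, "payload": packet["payload"]}

def inline_reply(packet, client):
    # Step 6: the reply passes back through the Edge (the backends' default
    # gateway), which rewrites the source to the VIP before reaching the client.
    return {"src": VIP, "dst": client, "payload": packet["payload"]}
```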

Practice

On my test bench, three servers run Apache configured for HTTPS. Edge will balance HTTPS requests in round-robin fashion, proxying each new request to the next server.
Let's get started.

Generating an SSL certificate that NSX Edge will use
You can import a valid CA certificate or use a self-signed one. For this test, I'll use a self-signed one.

  1. In the vCloud Director interface, go to the Edge services settings.
  2. Go to the Certificates tab. From the list of actions, select adding a new CSR.
  3. Fill in the required fields and click Keep.
  4. Select the newly created CSR and choose the Self-sign CSR option.
  5. Select the certificate validity period and click Keep.
  6. The self-signed certificate now appears in the list of available certificates.

Setting up the Application Profile
Application profiles give you more control over network traffic and make it easy and efficient to manage. They can be used to define behavior for specific types of traffic.

  1. Go to the Load Balancer tab and enable the balancer. The Acceleration enabled option here allows the balancer to use faster L4 balancing instead of L7.
  2. Go to the Application profile tab to set the application profile. Click +.
  3. Set the name of the profile and select the type of traffic for which the profile will be applied. Let me explain some options.
    Persistence – saves and tracks session data, for example which specific server from the pool is servicing the user's request. This ensures that the user's requests are directed to the same pool member for the lifetime of the session or in subsequent sessions.
    Enable SSL passthrough – when this option is selected, NSX Edge stops terminating SSL. Instead, termination occurs directly on the servers that are being balanced.
    Insert X-Forwarded-For HTTP header – allows you to determine the source IP address of the client connecting to the web server through the load balancer.
    Enable Pool Side SSL – allows you to specify that the selected pool consists of HTTPS servers.
  4. Since I will be balancing HTTPS traffic, Pool Side SSL must be enabled and the previously generated certificate selected on the Virtual Server Certificates -> Service Certificate tab.
  5. Do the same for Pool Certificates -> Service Certificate.

Creating a pool of servers across which traffic will be balanced (Pools)

  1. Go to the Pools tab. Press +.
  2. Set the pool name, select the balancing algorithm (I will use round robin) and the monitoring type for the backend health check. The Transparent option controls whether the original client source IPs are visible to the internal servers.
    • If this option is disabled, traffic for internal servers is sent from the source IP of the balancer.
    • If this option is enabled, internal servers see the source IP of clients. In this configuration, NSX Edge must act as the default gateway to ensure that returned packets go through NSX Edge.

    NSX supports the following balancing algorithms:

    • IP_HASH – the server is selected based on a hash of the source and destination IP addresses of each packet.
    • LEASTCONN – incoming connections are balanced based on the number of connections each server already has; new connections are directed to the server with the fewest.
    • ROUND_ROBIN – new connections are sent to each server in turn, according to the weight assigned to it.
    • URI - The left side of the URI (before the question mark) is hashed and divided by the total weight of the servers in the pool. The result indicates which server receives the request, ensuring that the request is always directed to the same server, as long as all servers remain available.
    • HTTP HEADER – balancing based on a specific HTTP header that can be specified as a parameter. If the header is missing or has no value, the ROUND_ROBIN algorithm is applied.
    • URL – Each HTTP GET request is searched for the URL parameter specified as an argument. If the parameter is followed by an equals sign and a value, then the value is hashed and divided by the total weight of the running servers. The result indicates which server receives the request. This process is used to keep track of user ids in requests and ensure that the same user id is always sent to the same server, as long as all servers remain available.
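To make the selection logic concrete, here is a small sketch of two of these algorithms in Python. The server names, weights, and the choice of MD5 as the hash are arbitrary illustrations, not NSX's actual implementation.

```python
import hashlib
import itertools

servers = [("web-01", 1), ("web-02", 1), ("web-03", 2)]  # (name, weight)

# ROUND_ROBIN with weights: each server appears in the cycle
# proportionally to its weight, so web-03 gets twice the traffic.
rotation = itertools.cycle(
    [name for name, weight in servers for _ in range(weight)]
)
print([next(rotation) for _ in range(4)])
# ['web-01', 'web-02', 'web-03', 'web-03']

# IP_HASH: a stable hash of the source/destination address pair picks
# the server, so the same client keeps landing on the same backend.
def ip_hash(src_ip, dst_ip):
    digest = hashlib.md5(f"{src_ip}:{dst_ip}".encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)][0]
```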


  3. In the Members block, click + to add servers to the pool.

    Here you need to specify:

    • server name;
    • server IP address;
    • the port on which the server will receive traffic;
    • port for health check (Monitor healthcheck);
    • weight (Weight) - this parameter adjusts the proportional share of traffic a specific pool member receives;
    • Max Connections - the maximum number of connections to the server;
    • Min Connections - the minimum number of connections the server must handle before traffic is redirected to the next pool member.


    This is what the final pool of three servers looks like.

Adding Virtual Server

  1. Go to the Virtual Servers tab. Press +.
  2. We activate the virtual server using Enable Virtual Server.
    Give it a name, select the previously created Application Profile and Pool, and specify the IP address on which the Virtual Server will accept outside requests. Specify the HTTPS protocol and port 443.
    Optional parameters here:
    Connection Limit – the maximum number of simultaneous connections that the virtual server can handle;
    Connection Rate Limit (CPS) – the maximum number of new incoming requests per second.

This completes the balancer configuration; let's verify that it works. The servers have a minimal configuration that makes it clear which server from the pool processed a request. During setup we chose the Round Robin balancing algorithm, and the Weight parameter of each server equals one, so each successive request is processed by the next server in the pool.
We enter the external address of the balancer in the browser and see:

After refreshing the page, the request will be processed by the following server:

And again - to check the third server from the pool:

When checking, you can see that the certificate that Edge sends us is the one that we generated at the very beginning.

Let's check the balancer status from the Edge gateway console. To do this, enter show service loadbalancer pool.

Configuring Service Monitor to check the status of servers in the pool
With Service Monitor, we can monitor the status of the servers in the backend pool. If the response to a request is not as expected, the server can be removed from the pool so that it does not receive any new requests.
Three verification methods are configured by default:

  • tcp monitor,
  • http monitor,
  • https monitor.

Let's create a new one.

  1. Go to the Service Monitoring tab, click +.
  2. Specify:
    • a name for the new method;
    • the interval at which requests will be sent;
    • the response timeout;
    • the monitoring type – an HTTPS request using the GET method, the expected status code 200 (OK), and the request URL.
  3. This completes the setup of the new Service Monitor, now we can use it when creating a pool.
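The check this monitor performs can be sketched as a tiny classifier: a probe passes only if the backend answers within the timeout with the expected status code. The parameter values and function name below are illustrative, not NSX settings.

```python
EXPECTED_STATUS = 200  # the status code we configured the monitor to expect
TIMEOUT_SECONDS = 15   # response timeout (illustrative value)
INTERVAL_SECONDS = 5   # how often the probe runs; unused in this single-probe sketch

def probe_passes(status_code, elapsed_seconds):
    """Classify a single HTTPS GET probe result."""
    if elapsed_seconds > TIMEOUT_SECONDS:
        return False  # no answer in time: the check fails
    return status_code == EXPECTED_STATUS

print(probe_passes(200, 0.2))   # True: healthy backend
print(probe_passes(503, 0.2))   # False: wrong status code
print(probe_passes(200, 30.0))  # False: timed out
```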

Setting up Application Rules

Application Rules are a way to manipulate traffic based on specific triggers. With this tool, we can create advanced load balancing rules that may not be configurable through Application profiles or other services available on the Edge Gateway.

  1. To create a rule, go to the Application Rules tab of the balancer.
  2. Specify a name and the script for the rule, then click Keep.
  3. After the rule is created, we need to edit the already configured Virtual Server.
  4. In the Advanced tab, add the rule we created.

In the example above, we enabled TLSv1 support.

A couple more examples:

Redirect traffic to another pool.
With this script, we can redirect traffic to another balancing pool if the main pool is down. For the rule to fire, several pools must be configured on the balancer and all members of the main pool must be in the down state. You need to specify the pool name, not its ID.

acl pool_down nbsrv(PRIMARY_POOL_NAME) eq 0
use_backend SECONDARY_POOL_NAME if pool_down

Redirect traffic to an external resource.
Here we are redirecting traffic to an external website if all members of the main pool are down.

acl pool_down nbsrv(NAME_OF_POOL) eq 0
redirect location http://www.example.com if pool_down

Even more examples can be found in the VMware documentation.

That's all about the balancer for me. If you have any questions, ask, I'm ready to answer.

Source: habr.com
