Migrating from Nginx to Envoy Proxy

Hey Habr! Here is a translation of the post: Migrating from Nginx to Envoy Proxy.

Envoy is a high-performance distributed proxy (written in C++) designed for individual services and applications, as well as a communication bus and "universal data plane" for large "service mesh" microservice architectures. It was built with the lessons learned from servers such as NGINX, HAProxy, hardware load balancers, and cloud load balancers in mind. Envoy runs alongside every application and abstracts the network, providing common functionality in a platform-agnostic way. When all service traffic in an infrastructure flows through an Envoy mesh, it becomes easy to visualize problem areas through consistent observability, tune overall performance, and add core functionality in a single place.

Capabilities

  • Out-of-process architecture: Envoy is a self-contained, high-performance server with a small memory footprint. It runs alongside any application language or framework.
  • HTTP/2 and gRPC support: Envoy has first-class HTTP/2 and gRPC support for both incoming and outgoing connections. It is a transparent HTTP/1.1 to HTTP/2 proxy.
  • Advanced load balancing: Envoy supports advanced load-balancing features, including automatic retries, circuit breaking, global rate limiting, request shadowing, zone-aware load balancing, and more.
  • Configuration management API: Envoy provides a robust API for dynamically managing its configuration.
  • Observability: deep observability of L7 traffic, native support for distributed tracing, and wire-level observability of MongoDB, DynamoDB, and many other applications.

Step 1 — NGINX Config Example

This step uses a specially prepared nginx.conf file, based on the complete example from the NGINX wiki. You can view the configuration by opening nginx.conf in the editor.

Initial nginx config

user  www www;
pid /var/run/nginx.pid;
worker_processes  2;

events {
  worker_connections   2000;
}

http {
  gzip on;
  gzip_min_length  1100;
  gzip_buffers     4 8k;
  gzip_types       text/plain;

  log_format main      '$remote_addr - $remote_user [$time_local]  '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$gzip_ratio"';

  log_format download  '$remote_addr - $remote_user [$time_local]  '
    '"$request" $status $bytes_sent '
    '"$http_referer" "$http_user_agent" '
    '"$http_range" "$sent_http_content_range"';

  upstream targetCluster {
    server 172.18.0.3:80;
    server 172.18.0.4:80;
  }

  server {
    listen        8080;
    server_name   one.example.com  www.one.example.com;

    access_log   /var/log/nginx.access_log  main;
    error_log  /var/log/nginx.error_log  info;

    location / {
      proxy_pass         http://targetCluster/;
      proxy_redirect     off;

      proxy_set_header   Host             $host;
      proxy_set_header   X-Real-IP        $remote_addr;
    }
  }
}

NGINX configurations typically have three key elements:

  1. Setting up the NGINX server, the logging structure, and Gzip functionality. These are defined globally and apply across the entire configuration.
  2. Configuring NGINX to accept requests to the host one.example.com on port 8080.
  3. Target location setup, how to handle traffic for different parts of the URL.

Not all of this configuration applies to Envoy Proxy, and some settings do not need to be configured at all. Envoy Proxy has four key concepts that map onto the core infrastructure offered by NGINX:

  • Listeners: They determine how Envoy Proxy accepts incoming requests. Envoy Proxy currently only supports TCP-based listeners. Once a connection is established, it is passed to the filter set for processing.
  • Filters: They are part of a pipeline architecture that can process incoming and outgoing data. This functionality includes filters such as Gzip, which compresses data before sending it to the client.
  • Routers: They redirect traffic to the desired destination, defined as a cluster.
  • Clusters: They define the endpoint for traffic and configuration options.

We will use these four components to create an Envoy Proxy configuration that matches the given NGINX configuration. Envoy is designed to be driven by APIs and dynamic configuration; in this case, however, the base configuration will use static, hard-coded settings equivalent to those in NGINX.

Step 2 — NGINX Configuration

The first part of nginx.conf defines some NGINX internals that must be configured.

Worker Connections

The configuration below determines the number of worker processes and connections. This indicates how NGINX will scale to meet demand.

worker_processes  2;

events {
  worker_connections   2000;
}

Envoy Proxy manages worker processes and connections differently.

Envoy creates a worker thread for every hardware thread in the system. Each worker thread runs a non-blocking event loop that is responsible for:

  1. Listening on each listener
  2. Accepting new connections
  3. Creating a filter stack for each connection
  4. Processing all I/O operations for the lifetime of the connection

All further processing of the connection is handled entirely in the worker thread, including any forwarding behavior.

Connection pools in Envoy are per worker thread. For example, HTTP/2 connection pools establish only one connection to each external host at a time, so with four worker threads there will be four HTTP/2 connections per external host at steady state. Because everything stays on a single worker thread, almost all code can be written without locks, as if it were single-threaded. Allocating more worker threads than necessary wastes memory, creates a larger number of idle connections, and lowers the connection-pool hit rate.
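The worker-thread count can also be set explicitly rather than left at the hardware default. A minimal sketch using Envoy's --concurrency command-line flag (a real Envoy flag; the config path is illustrative):

```shell
# Run Envoy with four worker threads instead of the default of one per
# hardware thread; fewer workers means fewer, better-utilized connection pools.
envoy -c /etc/envoy/envoy.yaml --concurrency 4
```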

For more information, see the Envoy Proxy blog.

HTTP Configuration

The following NGINX configuration block defines HTTP settings such as:

  • What mime types are supported
  • Default timeouts
  • Gzip configuration

You can customize these aspects with filters in Envoy Proxy, which we'll discuss later.
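As a preview, the Gzip settings from the NGINX http block map onto Envoy's gzip HTTP filter. The sketch below is hypothetical: the field names follow the gzip filter's v2 schema, but check the documentation for your Envoy version before using it.

```yaml
# Hypothetical sketch: gzip placed before the router in the http_filters list.
http_filters:
- name: envoy.gzip
  config:
    content_length: 1100   # counterpart of nginx's gzip_min_length
    content_type:          # counterpart of nginx's gzip_types
    - text/plain
- name: envoy.router
```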

Step 3 — Server Configuration

In the HTTP configuration block, the NGINX configuration listens on port 8080 and responds to incoming requests for the domains one.example.com and www.one.example.com.

 server {
    listen        8080;
    server_name   one.example.com  www.one.example.com;

Inside Envoy, Listeners manage this.

Envoy listeners

The most important aspect of getting started with Envoy Proxy is defining the listeners. You need to create a configuration file that describes how you want to run the Envoy instance.

The snippet below will create a new listener and bind it to port 8080. The configuration tells Envoy Proxy which ports it should be bound to for incoming requests.

Envoy Proxy uses YAML notation for its configuration. For an introduction to this notation, see this link.

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
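At this point the partial configuration can already be sanity-checked with Envoy's validate mode, which parses the file without opening any sockets (the docker invocation mirrors the one used in Step 7; the host path is illustrative):

```shell
# Parse and validate the configuration; exits non-zero if the YAML is invalid.
docker run -v /root/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy \
  envoy --mode validate -c /etc/envoy/envoy.yaml
```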

No need to define server_name, as the Envoy Proxy filters will handle this.

Step 4 — Location Configuration

When a request arrives at NGINX, the location block determines how to handle and where to route the traffic. In the following fragment, all traffic to the site is transferred to the upstream (translator's note: upstream is usually an application server) cluster named targetCluster. The upstream cluster defines the nodes that should process the request. We will discuss this in the next step.

location / {
    proxy_pass         http://targetCluster/;
    proxy_redirect     off;

    proxy_set_header   Host             $host;
    proxy_set_header   X-Real-IP        $remote_addr;
}

In Envoy, this is handled by Filters.

Envoy Filters

For a static configuration, filters determine how incoming requests are processed. Here we define filters that match the server_names from the previous step. When an incoming request matches the configured domains and routes, the traffic is routed to the cluster. This is the equivalent of NGINX's upstream configuration.

    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
                - "one.example.com"
                - "www.one.example.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router

The first filter, envoy.http_connection_manager, is built into Envoy Proxy. Other filters include Redis, Mongo, and TCP. You can find the complete list in the documentation.

For more information on other load balancing policies, visit Envoy Documentation.

Step 5 — Proxy and Upstream Configuration

In NGINX, the upstream configuration defines the set of target servers that will handle the traffic. In this case, two hosts have been assigned.

  upstream targetCluster {
    server 172.18.0.3:80;
    server 172.18.0.4:80;
  }

In Envoy this is managed by clusters.

Envoy Clusters

The equivalent of upstream is defined as clusters. Here, the hosts that will serve the traffic are defined. The way hosts are accessed, such as timeouts, is defined as cluster configuration. This allows finer-grained control over aspects such as latency and load balancing.

  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]

When using STRICT_DNS service discovery, Envoy continuously and asynchronously resolves the specified DNS targets. Each IP address returned in a DNS result is treated as an explicit host in the upstream cluster. This means that if a query returns two IP addresses, Envoy assumes the cluster has two hosts and that both should be load balanced. If a host is removed from the result, Envoy assumes it no longer exists and drains traffic from any existing connection pools.
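The resolution behaviour can be tuned per cluster. A hedged sketch of two such knobs (dns_refresh_rate and lb_policy are documented cluster fields; the values shown are arbitrary examples, not recommendations):

```yaml
clusters:
- name: targetCluster
  connect_timeout: 0.25s
  type: STRICT_DNS
  dns_lookup_family: V4_ONLY
  dns_refresh_rate: 5s      # how often the DNS targets are re-resolved
  lb_policy: LEAST_REQUEST  # alternative to ROUND_ROBIN
  hosts: [{ socket_address: { address: 172.18.0.3, port_value: 80 }}]
```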

For more information see Envoy proxy documentation.

Step 6 — Log Access and Errors

The final piece of the configuration is logging. Instead of writing error logs to disk, Envoy Proxy takes a cloud-native approach: all application logs are written to stdout and stderr.

Access logs for requests are optional and disabled by default. To enable access logs for HTTP requests, enable the access_log configuration for the HTTP connection manager. The path can be either a device, such as stdout, or a file on disk, depending on your requirements.

The following configuration redirects all access logs to stdout (translator's note: stdout is required when running Envoy inside Docker; when running without Docker, replace /dev/stdout with the path to a regular log file). Copy the snippet into the connection manager's configuration section:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"

The results should look like this:

      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          route_config:

By default, Envoy has a format string that includes the details of the HTTP request:

[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"\n

The result of this format string is:

[2018-11-23T04:51:00.281Z] "GET / HTTP/1.1" 200 - 0 58 4 1 "-" "curl/7.47.0" "f21ebd42-6770-4aa5-88d4-e56118165a7d" "one.example.com" "172.18.0.4:80"

The output content can be customized by setting the format field. For example:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    format: "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\"\n"

The log line can also be output in JSON format by setting the json_format field. For example:

access_log:
- name: envoy.file_access_log
  config:
    path: "/dev/stdout"
    json_format: {"protocol": "%PROTOCOL%", "duration": "%DURATION%", "request_method": "%REQ(:METHOD)%"}
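A practical benefit of json_format is that each log line becomes machine-parseable. A minimal sketch (the sample line is hypothetical output shaped like the json_format fields above; python3 is assumed to be available):

```shell
# A hypothetical access-log line matching the json_format config above.
line='{"protocol": "HTTP/1.1", "duration": "4", "request_method": "GET"}'
# Extract a single field; jq or any JSON-aware tool works the same way.
echo "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["request_method"])'
# → GET
```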

For more information on Envoy's approach to logging, visit

https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log#config-access-log-format-dictionaries

Logging is not the only way to gain insight into Envoy Proxy in production. It has advanced tracing and metrics built in. You can find out more in the tracing documentation or via the Interactive Routing scenario.

Step 7 — Launch

You have now migrated the configuration from NGINX to Envoy Proxy. The last step is to launch an Envoy Proxy instance to test it.

Run as user

The line user www www; at the top of the NGINX configuration indicates that NGINX runs as a low-privileged user to improve security.

Envoy Proxy takes a cloud-native approach to managing process ownership. When running Envoy Proxy in a container, we can specify a low-privileged user.

Run Envoy Proxy

The command below launches Envoy Proxy in a Docker container on the host. It exposes port 80 for incoming requests, while inside the container Envoy Proxy listens on port 8080, as specified in the listener configuration. This allows the process to run as a low-privileged user, since binding to ports below 1024 normally requires elevated privileges.

docker run --name proxy1 -p 80:8080 --user 1000:1000 -v /root/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy

Testing

With the proxy running, requests can now be sent and processed. The following cURL command issues a request with the Host header defined in the proxy configuration.

curl -H "Host: one.example.com" localhost -i

The HTTP request results in a 503 error. This is because the upstream services are not running and are therefore unavailable: Envoy Proxy has no target destination for the request. The following command starts two HTTP services that match the configuration defined for Envoy.

docker run -d katacoda/docker-http-server; docker run -d katacoda/docker-http-server;
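The cluster configuration hard-codes 172.18.0.3 and 172.18.0.4. The addresses Docker actually assigned can be verified with docker inspect (standard Docker commands; container names and networks will differ on your host):

```shell
# List the IP address of each running container to confirm they match
# the hosts in the Envoy cluster configuration.
docker ps -q | xargs docker inspect \
  -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
```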

With the services available, Envoy can successfully proxy traffic to its destination.

curl -H "Host: one.example.com" localhost -i

You should see a response indicating which Docker container handled the request. In the Envoy Proxy logs, you should also see an access log line printed.

Additional HTTP Response Headers

You will see additional HTTP headers in the response to a valid request. The x-envoy-upstream-service-time header shows the time, in milliseconds, that the upstream host spent processing the request. This is useful when a client wants to distinguish service time from network latency.

x-envoy-upstream-service-time: 0
server: envoy

Final config

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
                - "one.example.com"
                - "www.one.example.com"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: targetCluster
          http_filters:
          - name: envoy.router
  clusters:
  - name: targetCluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts: [
      { socket_address: { address: 172.18.0.3, port_value: 80 }},
      { socket_address: { address: 172.18.0.4, port_value: 80 }}
    ]

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9090 }

Additional information from the translator

Instructions for installing Envoy Proxy can be found on the website https://www.getenvoy.io/

The rpm package does not include a systemd service config by default.

Add systemd service config to /etc/systemd/system/envoy.service:

[Unit]
Description=Envoy Proxy
Documentation=https://www.envoyproxy.io/
After=network-online.target
Requires=envoy-auth-server.service
Wants=nginx.service

[Service]
User=root
Restart=on-failure
ExecStart=/usr/bin/envoy --config-path /etc/envoy/config.yaml

[Install]
WantedBy=multi-user.target

You need to create the /etc/envoy/ directory and put the config.yaml config there.
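With the unit file and config in place, the service is managed with the usual systemd commands (these require root and a systemd-based host):

```shell
sudo systemctl daemon-reload       # pick up the new unit file
sudo systemctl enable --now envoy  # start Envoy and enable it at boot
sudo systemctl status envoy        # verify it is running
```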

There is a telegram chat for envoy proxy: https://t.me/envoyproxy_ru

Envoy Proxy does not support serving static content. Anyone who would like that feature can vote for it here: https://github.com/envoyproxy/envoy/issues/378


Source: habr.com
