Battle of WEB servers. Part 2 - Realistic HTTPS Scenario:

We covered the methodology in the first part of this series; in this one we test HTTPS, but in more realistic scenarios. For the tests, a Let's Encrypt certificate was obtained and Brotli compression was enabled at level 11.

This time we will try to reproduce the scenario of deploying the server on a VDS, or as a virtual machine on a host with a typical processor. For this, a CPU limit was set at:

  • 25% - which in terms of frequency is ~1350 MHz
  • 35% - 1890 MHz
  • 41% - 2214 MHz
  • 65% - 3510 MHz
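
The percentages above map onto frequencies via the host's base clock; the article's own numbers imply a base of 5400 MHz (1350 MHz / 0.25). A minimal sketch of that conversion, with `BASE_MHZ` inferred from those figures:

```python
# Effective frequency for each CPU limit. BASE_MHZ is inferred from the
# article's numbers (1350 MHz / 0.25 = 5400 MHz), not stated explicitly.
BASE_MHZ = 5400

def effective_mhz(limit_pct: float) -> int:
    """Approximate effective frequency for a given CPU-limit percentage."""
    return round(BASE_MHZ * limit_pct / 100)

for pct in (25, 35, 41, 65):
    print(f"{pct}% -> {effective_mhz(pct)} MHz")
```

Running this reproduces all four values from the list (1350, 1890, 2214 and 3510 MHz).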

The number of simultaneous connections was reduced from 500 to 1, 3, 5, 7 and 9.

Results:

Delays:

TTFB was split out into its own test, because HTTPD Tools effectively creates a new user for every single request. That makes this test still fairly detached from reality, since a real user will click through a couple of pages, and in practice it is TTFB on those subsequent requests that plays the main role.

The very first request after the first start of the IIS virtual machine takes an average of 120 ms.

All subsequent requests show a TTFB of 1.5 ms; Apache and Nginx lag behind here. Personally, the author considers this test the most revealing and would pick the winner on it alone.
The result is not surprising, since IIS caches already-compressed static content and does not recompress it on every access.

Time spent per client

To evaluate performance, a test with a single simultaneous connection is sufficient.
For example, IIS completed a test of 5000 users in 40 seconds, which is 123 requests per second.
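
As a quick sanity check on that arithmetic: 5000 requests over a flat 40 s would be 125 RPS, so the reported 123 RPS corresponds to an elapsed time closer to 40.7 s. A trivial sketch:

```python
def requests_per_second(total_requests: int, elapsed_s: float) -> float:
    """Throughput as total requests divided by wall-clock time."""
    return total_requests / elapsed_s

# A flat 40 s run would give 125 RPS; the reported 123 RPS implies
# the run actually took about 40.7 s.
print(requests_per_second(5000, 40))   # 125.0
print(round(5000 / 123, 1))            # 40.7
```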

The graphs below show the time until the site's content is fully transferred, i.e. the proportion of requests completed within a given time. In our case, 80% of all requests were processed within 8 ms on IIS and within 4.5 ms on Apache and Nginx, while 8% of all requests on Apache and Nginx completed in an interval of up to 98 ms.
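
This is the same "percentage of requests served within a certain time" view that ApacheBench prints. A minimal sketch of how such a distribution is computed from per-request completion times (the sample latencies below are invented for illustration, loosely matching the 80%/8 ms figures above):

```python
def served_within(latencies_ms, threshold_ms):
    """Fraction of requests that completed within threshold_ms."""
    return sum(t <= threshold_ms for t in latencies_ms) / len(latencies_ms)

# Hypothetical sample of 100 request times, shaped like the article's data:
# 80 fast requests, a middle band, and a slow tail up to 98 ms.
latencies = [4.5] * 80 + [50.0] * 12 + [98.0] * 8

print(served_within(latencies, 8))    # 0.8  -> 80% within 8 ms
print(served_within(latencies, 98))   # 1.0  -> all within 98 ms
```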

Time for which 5000 requests were processed:

If you have a virtual machine with a 3.5 GHz CPU and 8 cores, pick whichever you like: all the web servers perform very similarly in this test. Which web server to choose for each host is discussed below.

When it comes to a slightly more realistic situation, all web servers go toe to toe.

Throughput:

Latency plotted against the number of simultaneous connections; flatter and lower is better. The slowest 2% of requests were left off the charts because they would make them unreadable.
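
Dropping the slowest 2% before plotting can be sketched in a few lines (the sample data is made up for illustration):

```python
def drop_slowest(latencies_ms, fraction=0.02):
    """Return latencies with the slowest `fraction` of requests removed,
    as was done for the charts to keep them readable."""
    keep = int(len(latencies_ms) * (1 - fraction))
    return sorted(latencies_ms)[:keep]

sample = list(range(1, 101))   # hypothetical latencies of 1..100 ms
trimmed = drop_slowest(sample)
print(max(trimmed))            # 98 -> the top 2% (99 and 100 ms) removed
```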

Now consider the option where the server is on shared hosting. Let's take 4 cores at 2.2 GHz, and one core at 1.8 GHz.


How to scale

If you have ever seen the I-V curves of vacuum triodes, pentodes, and so on, these graphs will look familiar. That is what we are trying to capture: saturation, the point beyond which, no matter how many cores you throw at the problem, the performance gain becomes negligible.

Previously, the whole challenge was to process 98% of requests while having the lowest latency across all requests, keeping the curve as flat as possible. Now, by constructing another curve, we find the optimal operating point for each of the servers.

To do this, take the requests-per-second (RPR) indicator. The horizontal axis is the frequency, the vertical axis is the number of requests processed per second, and the lines are core counts.
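
Finding the operating point on such a curve amounts to locating the knee where extra frequency stops buying proportionate throughput. A rough sketch under that assumption (the data points are invented; `min_gain_per_mhz` is an arbitrary illustrative threshold, not from the article):

```python
def knee_point(freqs_mhz, rps, min_gain_per_mhz=0.01):
    """Return the last frequency before the marginal RPS gain per extra
    MHz falls below min_gain_per_mhz (i.e. the saturation knee)."""
    for i in range(1, len(freqs_mhz)):
        gain = (rps[i] - rps[i - 1]) / (freqs_mhz[i] - freqs_mhz[i - 1])
        if gain < min_gain_per_mhz:
            return freqs_mhz[i - 1]
    return freqs_mhz[-1]

freqs = [1350, 1890, 2214, 3510]      # the article's test frequencies
rps   = [60, 95, 118, 123]            # hypothetical saturating RPR curve

print(knee_point(freqs, rps))         # 2214 -> gains flatten past this point
```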

Notice how well Nginx handles requests one at a time: 8 cores perform better in this kind of testing.

This graph clearly shows how much better (though not by much) Nginx performs on a single core. If you run Nginx and host only static content, consider reducing the core count to one.

Although IIS has the lowest TTFB according to Chrome DevTools, it manages to lose to both Nginx and Apache in a serious fight with the Apache Foundation's stress test.

All the bumps in the curves are reproduced on bare metal as well.

Kind of a conclusion:

Yes, Apache on bare metal works worse on 1 and 8 cores, and a little better on 4.

Yes, Nginx on 8 cores processes sequential requests better than on 1 and 4 cores, but works worse when there are many connections.

Yes, IIS prefers 4 cores for a multi-threaded workload and 8 cores for a single-threaded one. In the end, IIS turned out to be slightly faster than everyone else on 8 cores under high load, although all the servers were on par.

This is not measurement error: the error here is no more than ±1 ms for latency and no more than ±2-3 requests per second for RPR.

The results where 8 cores perform worse are not at all surprising: many cores plus SMT/Hyper-Threading significantly degrade performance when there is a tight time frame in which the entire pipeline must complete.


Source: habr.com
