Cisco HyperFlex vs. competitors: testing performance

We continue to introduce you to the Cisco HyperFlex hyperconverged system.

In April 2019, Cisco is once again holding a series of demonstrations of its HyperFlex hyperconverged solution across Russia and Kazakhstan. You can sign up for a demonstration through the feedback form at the link. Join now!

Earlier we published an article about load tests performed by the independent lab ESG Lab in 2017. In 2018, the performance of the Cisco HyperFlex solution (version HX 3.0) improved significantly, and competing solutions have continued to improve as well. That is why we are publishing a new, updated version of ESG's comparative load tests.

In the summer of 2018, ESG Lab repeated the comparison of Cisco HyperFlex with its competitors. Given the current trend toward software-defined solutions, vendors of such platforms were also added to the comparison.

Test configurations

As part of the testing, HyperFlex was compared with two fully software-based hyperconverged systems that install on standard x86 servers, as well as with one combined software-and-hardware solution. Testing was carried out using HCIBench, a standard tool for hyperconverged systems that uses Oracle Vdbench and automates the testing process. In particular, HCIBench automatically creates the virtual machines, coordinates the load between them, and generates convenient, readable reports.

140 virtual machines were created per cluster (35 per cluster node). Each virtual machine was configured with 4 vCPUs and 4 GB of RAM, a 16 GB local disk, and an additional 40 GB data disk.
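As a quick sanity check on the scale of the test bed, here is a minimal sketch (plain arithmetic derived from the figures above, not from the report) of the aggregate resources the test VMs consume per four-node cluster:

```python
# Test-bed footprint per four-node cluster, derived from the figures above.
nodes, vms_per_node = 4, 35
vcpus_per_vm, ram_gb_per_vm = 4, 4
os_disk_gb, data_disk_gb = 16, 40

total_vms = nodes * vms_per_node                              # 140 VMs
print(f"vCPUs per cluster: {total_vms * vcpus_per_vm}")       # 560 vCPUs
print(f"RAM per cluster:   {total_vms * ram_gb_per_vm} GB")   # 560 GB
print(f"Disk per cluster:  {total_vms * (os_disk_gb + data_disk_gb) / 1000:.2f} TB")  # 7.84 TB
```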

The following cluster configurations participated in testing:

  • a cluster of four Cisco HyperFlex 220C nodes, each with 1 x 400 GB SSD for cache and 6 x 1.2 TB SAS HDDs for data;
  • a four-node cluster from Vendor A, each node with 2 x 400 GB SSDs for cache and 4 x 1 TB SATA HDDs for data;
  • a four-node cluster from Vendor B, each node with 2 x 400 GB SSDs for cache and 12 x 1.2 TB SAS HDDs for data;
  • a four-node cluster from Vendor C, each node with 4 x 480 GB SSDs for cache and 12 x 900 GB SAS HDDs for data.
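For reference, a small sketch comparing raw cache and data capacity per node, computed from the drive counts above (assuming raw, pre-replication capacity):

```python
# Raw per-node capacity from the configurations above (before RAID/replication).
configs = {
    "HyperFlex": (1 * 400, 6 * 1200),    # (cache GB, data GB)
    "Vendor A":  (2 * 400, 4 * 1000),
    "Vendor B":  (2 * 400, 12 * 1200),
    "Vendor C":  (4 * 480, 12 * 900),
}
for name, (cache_gb, data_gb) in configs.items():
    print(f"{name}: cache {cache_gb} GB, data {data_gb / 1000:.1f} TB, "
          f"cache/data ratio {cache_gb / data_gb:.1%}")
```

Notably, HyperFlex was tested with the least cache capacity per node of the four configurations.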

The processors and RAM of all solutions were identical.

Virtual machine count test

Testing began with a workload that emulates a standard OLTP profile: 70%/30% read/write (RW), 100% full random, with a target of 800 IOPS per virtual machine (VM). The test ran on 140 VMs in each cluster for three to four hours. The goal was to keep write latency at or below 5 milliseconds on as many VMs as possible.
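Under the stated parameters, a minimal sketch of what this OLTP profile could look like as a Vdbench-style workload definition, plus the aggregate load it implies. The real parameter file is generated by HCIBench; the block size (xfersize) and device path below are assumptions, as the report does not state them:

```python
# Hypothetical Vdbench-style definition of the OLTP profile described above:
# 70/30 read/write, 100% random, capped at 800 IOPS per VM.
oltp_profile = """
sd=sd1,lun=/dev/sdb,openflags=o_direct
wd=oltp,sd=sd1,rdpct=70,seekpct=100,xfersize=4k
rd=oltp_run,wd=oltp,iorate=800,elapsed=14400,interval=30
"""

# Aggregate load the 140 VMs place on each cluster at the 800 IOPS target.
vms, iops_per_vm = 140, 800
total_iops = vms * iops_per_vm            # 112,000 IOPS
print(f"Cluster target: {total_iops:,} IOPS "
      f"({int(total_iops * 0.7):,} reads / {int(total_iops * 0.3):,} writes)")
```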

As the chart below shows, HyperFlex was the only platform to pass this test with the full 140 VMs, at a latency below 5 ms (4.95 ms). For each of the other clusters, the test was rerun over several iterations to experimentally fit the number of VMs to the 5 ms latency target.
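Whether done manually or scripted, that rerun procedure amounts to searching for the largest VM count that still meets the latency target. A minimal sketch of the idea, where measure_write_latency_ms is a hypothetical stand-in for a full HCIBench run:

```python
def measure_write_latency_ms(vm_count: int) -> float:
    # Hypothetical stand-in for a full HCIBench run at the given VM count;
    # a toy model in which latency grows linearly with load.
    return 2.0 + 0.04 * vm_count

def fit_vm_count(max_vms: int = 140, target_ms: float = 5.0) -> int:
    # Largest VM count whose write latency stays at or below the target.
    lo, hi, best = 1, max_vms, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if measure_write_latency_ms(mid) <= target_ms:
            best, lo = mid, mid + 1   # target met: try more VMs
        else:
            hi = mid - 1              # target missed: try fewer VMs
    return best

print(fit_vm_count())  # 75 with this toy latency model
```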

  • Vendor A handled 70 VMs with an average response time of 4.65 ms.
  • Vendor B achieved a latency of 5.37 ms only after scaling down to 36 VMs.
  • Vendor C handled 48 VMs with a response time of 5.02 ms.

[Chart: number of VMs sustained at the 5 ms latency target by each platform]

SQL Server Load Emulation

Next, ESG Lab emulated the load of SQL Server. The test used various block sizes and read/write ratios. The test was also run on 140 virtual machines.
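The report does not list the exact mix, but an SQL Server-like profile is typically a blend of small random reads and writes plus larger sequential transfers. A purely illustrative sketch of such a blend (all block sizes and ratios here are assumptions, not ESG's actual parameters):

```python
# Illustrative SQL Server-like I/O mix. These block sizes and ratios are
# assumptions for the sketch; ESG describes the profile only as
# "various block sizes and read/write ratios".
workload_mix = [
    # (share of I/O, block size, read %, access pattern)
    (0.70, "8k",   90, "random"),      # data-file page reads
    (0.20, "8k",    0, "random"),      # checkpoint / lazy-writer writes
    (0.10, "64k", 100, "sequential"),  # read-ahead scans
]
for share, block, rdpct, pattern in workload_mix:
    print(f"{share:>4.0%}  {block:>4}  {rdpct:>3}% read  {pattern}")
```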

As the figure below shows, the Cisco HyperFlex cluster delivered nearly twice the IOPS of Vendors A and B, and more than five times that of Vendor C. Cisco HyperFlex's average response time was 8.2 ms; for comparison, Vendor A averaged 30.6 ms, Vendor B 12.8 ms, and Vendor C 10.33 ms.

[Chart: IOPS and average response time per platform under the SQL Server workload]

An interesting observation was made during all of the tests: Vendor B showed significant variation in average per-VM IOPS. In other words, the load was distributed extremely unevenly, with some VMs averaging over 1,000 IOPS and others just 64 IOPS. Cisco HyperFlex looked much more stable: all 140 VMs received an average of 600 IOPS from the storage subsystem, so the load was distributed very evenly between the virtual machines.
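One simple way to quantify this unevenness is the coefficient of variation of per-VM IOPS: near zero for HyperFlex's uniform ~600 IOPS, large when VMs range from 64 to over 1,000. A sketch with illustrative samples (the per-VM numbers are made up to match the ranges described in the text):

```python
import statistics

def cv(samples):
    # Coefficient of variation: stdev / mean (0 means a perfectly even load).
    return statistics.stdev(samples) / statistics.mean(samples)

# Illustrative per-VM IOPS samples matching the ranges described above.
hyperflex = [600] * 8                                 # ~600 IOPS on every VM
vendor_b  = [1050, 980, 64, 70, 1100, 90, 1000, 75]   # wide spread

print(f"HyperFlex CV: {cv(hyperflex):.2f}")  # 0.00 -> even distribution
print(f"Vendor B  CV: {cv(vendor_b):.2f}")   # ~0.93 -> highly uneven
```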

[Chart: per-VM IOPS distribution, Cisco HyperFlex vs. Vendor B]

It is important to note that this uneven distribution of IOPS across virtual machines was observed for Vendor B in every iteration of testing.

In real production, this behavior can be a serious problem for administrators: individual virtual machines begin to "freeze" at random, and there is practically no way to control the process. The only, and far from ideal, way to balance the load on Vendor B's solution is to apply some form of QoS or load balancing.

Conclusions

Consider what 140 virtual machines per four-node cluster for Cisco HyperFlex means versus 70 or fewer for the other solutions. For a business, it means that supporting the same number of applications on HyperFlex requires half as many nodes as competing solutions, making the final system significantly cheaper. Add to that the automation of all network, server, and HX Data Platform maintenance operations, and it becomes clear why Cisco HyperFlex solutions are gaining market popularity so quickly.
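To put the density argument in concrete terms, a quick sketch using the measured VM counts from the latency test above (the 280-VM footprint is a hypothetical example):

```python
import math

# VMs each four-node cluster sustained in the latency test above.
vms_per_cluster = {"HyperFlex": 140, "Vendor A": 70, "Vendor B": 36, "Vendor C": 48}

target_vms = 280  # hypothetical application footprint
for name, vms in vms_per_cluster.items():
    nodes = math.ceil(target_vms / (vms / 4))   # clusters in the test had 4 nodes
    print(f"{name}: {vms / 4:.1f} VMs/node -> {nodes} nodes for {target_vms} VMs")
```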

Overall, ESG Lab confirmed that hybrid Cisco HyperFlex HX 3.0 clusters deliver higher and more consistent performance than comparable solutions.

At the same time, the HyperFlex hybrid clusters outperformed competitors in both IOPS and latency. Just as important, HyperFlex delivered this performance with a very evenly distributed load across the storage.

Recall that you can see the Cisco HyperFlex solution in action and evaluate its capabilities right now. The system is available for demonstration to everyone.

Source: habr.com
