In March of this year, Red Hat released Advanced Cluster Management (ACM) 2.10. Part of the preparation for this release included scale testing ACM. In this article we’ll give an overview of how this scale testing is carried out and what results came from it. Our overall goal for this testing is to ensure ACM does not regress in performance when managing 3500 Single Node OpenShift (SNO) clusters in a disconnected IPv6 environment with network latency and bandwidth limits enabled. This testing is designed to simulate an environment representative of how our telco customers use ACM. Simulation is key here: more than 3500 physical servers would be an excessive amount of hardware to manage. To reduce this burden, all of the SNOs are virtualized, which cuts the physical hardware requirement from more than 3500 nodes to just 140.
The test environment includes a jump host, three OpenShift control-plane hub nodes, and a fleet of hypervisors that host the virtualized managed SNO clusters. The jump host provides access to the disconnected IPv6 environment and runs infrastructure services such as the on-premises assisted service that installed the hub cluster, an HTTP server, an image registry, CoreDNS, and Gogs, a self-hosted Git instance. The hub cluster is a compact OpenShift cluster consisting of three nodes that share control-plane and worker roles. The hypervisor machines are vanilla libvirt machines running Red Hat Enterprise Linux. Virtual machines are managed directly through libvirt and have virtual Redfish access provided by sushy-tools.
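To make the Redfish emulation concrete: sushy-tools reads a small Python-syntax configuration file that points its emulator at a libvirt instance. Below is a minimal sketch using real sushy-tools option names; the specific values (listen address, port, libvirt URI) are illustrative assumptions, not the actual settings of our test lab.

```python
# sushy-emulator.conf -- hypothetical example, not the lab's real configuration.
# sushy-tools exposes a virtual Redfish BMC for each VM found via the libvirt URI,
# so provisioning tools can power-manage and boot the virtual SNOs like bare metal.

SUSHY_EMULATOR_LISTEN_IP = "::"                 # listen on IPv6, matching the disconnected IPv6 lab
SUSHY_EMULATOR_LISTEN_PORT = 8000               # port the Redfish API is served on
SUSHY_EMULATOR_LIBVIRT_URI = "qemu:///system"   # the local libvirt instance hosting the VMs
```

Each hypervisor would run its own emulator instance so that every virtual SNO gets a Redfish endpoint backed by its libvirt domain.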
Test execution happens using ZTP and GitOps in batches of 500 SNOs per hour or 40 SNOs per 5 minutes. These two rates simulate two extremes of a workload: one with large spikes and one that is small but constantly running. Each batch of managed nodes is first provisioned and managed by ACM, then has a set of policies applied that simulates our telco customers’ target environment. Once the managed nodes are provisioned, managed, and have telco policies applied, we collect data showing how long each of these steps took on each managed cluster and the resources used across all the clusters. Using this data we can both see the performance improvements of ACM 2.10 relative to past releases and test for regressions in ACM scalability. A simple example of the collected results is the deployment statistics. Here are two graphs that show the 500/1hr and 40/5m managed cluster deployments.
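As a back-of-the-envelope check on the two cadences above, the sketch below computes how long it takes to submit all 3500 SNOs at each rate. The numbers (3500 clusters, 500/1hr, 40/5m) come from the article; the helper function itself is illustrative and ignores per-cluster install time.

```python
# Rough timeline for submitting 3500 SNOs at the two batch cadences described above.
import math

def submission_hours(total_clusters: int, batch_size: int, interval_minutes: float) -> float:
    """Hours spanned by submitting every batch, assuming one batch per interval."""
    batches = math.ceil(total_clusters / batch_size)
    return batches * interval_minutes / 60

spiky = submission_hours(3500, 500, 60)   # 7 batches, one per hour -> 7.0 hours
steady = submission_hours(3500, 40, 5)    # 88 batches, one per 5 minutes -> ~7.3 hours
print(f"500/1hr: {spiky:.1f} h, 40/5m: {steady:.1f} h")
```

Both cadences cover the full fleet in roughly the same wall-clock span, which is why the interesting difference between them is the shape of the load on the hub (spiky versus steady), not the total duration.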
For the ACM 2.10 large scale testing, there were twenty-two completed test runs. These runs take over an hour, at minimum, to prepare and eight to twelve hours to complete. Upgrade runs take an additional two business days or more to both run the managed cluster upgrades and gather upgrade results. (It is important to note that the length of time our tests take to complete is not in any way representative of what a customer will experience with ACM. The time necessary to run our tests includes a significant amount of effort to maintain and prepare our simulation environment and to gather results after the tests are completed. This extra overhead is not part of a customer’s requirements to install or maintain ACM.) Nineteen of these runs were non-upgrade runs that only provision the managed nodes and apply policies until the nodes reach policy compliance. Three of the runs included managed node upgrades. Overall results indicate that ACM 2.10 can deploy, manage, and configure 3500+ SNOs with a failure rate of less than 0.7%. Further, hub cluster performance running ACM 2.10 on OCP 4.15 is better than ACM 2.9 on OCP 4.14: there was a general reduction in CPU and memory utilization from ACM 2.9 to 2.10, and the time managed clusters took to install and become policy compliant was comparable between the two releases.
This testing is an important part of our release cycle. It brings confidence to our company and our customers in the value provided by our products. The results we provide to our customers help them make critical decisions about the infrastructure they maintain to run their business. We are very pleased with the results of the completed ACM 2.10 large scale testing. ACM 2.11 large scale testing is already underway to ensure that Advanced Cluster Management remains a product Red Hat’s customers can rely on for versions to come.