Achieving high-density deployments with NUMA
There are many factors that can affect the performance of Virtual Machines (VMs) running on host hardware. One of these is how the VM interacts with NUMA.
This section provides an overview of NUMA and how it applies to Pexip Infinity Conferencing Nodes. It summarizes our recommendations and suggests best practices for maximizing performance.
NUMA stands for non-uniform memory access.
It is an architecture that divides a computer into a number of nodes, each containing one or more processor cores and associated memory. A core can access memory allocated to its own NUMA node (local memory) faster than it can access memory allocated to another NUMA node on the same machine (remote memory).
The diagram (right) outlines the physical components of a host server and shows the relationship to each NUMA node.
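As a concrete illustration, the sketch below (ours, not part of Pexip Infinity) prints each NUMA node's cores and local memory by reading the Linux sysfs topology files; it assumes a Linux host exposing the usual sysfs layout, so it does not apply directly to an ESXi host. On Linux, tools such as numactl --hardware and lscpu report the same information.

```python
# Minimal sketch (Linux host assumed): list each NUMA node's CPUs and local memory.
from pathlib import Path

def numa_topology():
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        mem_kb = 0
        for line in (node / "meminfo").read_text().splitlines():
            if "MemTotal:" in line:
                mem_kb = int(line.split()[-2])  # value is reported in kB
        print(f"{node.name}: CPUs {cpus}, local memory ~{mem_kb // (1024 * 1024)} GiB")

if __name__ == "__main__":
    numa_topology()
```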
We strongly recommend that a Pexip Infinity Conferencing Node VM is deployed on a single NUMA node in order to avoid the loss of performance incurred when a core accesses memory outside its own node.
In practice, with modern servers, each socket represents a NUMA node. We therefore recommend that:
- one Pexip Infinity Conferencing Node VM is deployed per socket of the host server
- the number of vCPUs that the Conferencing Node VM is configured to use is no more than the number of physical cores available in that socket (unless you are taking advantage of hyperthreading to deploy one vCPU per logical thread, in which case see NUMA affinity and hyperthreading below).
This second diagram shows how the components of a Conferencing Node virtual machine relate to the server components and NUMA nodes.
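To turn the per-socket recommendation into numbers for a specific host, a small script can read the socket and core counts and suggest a Conferencing Node size. The sketch below is illustrative only and assumes a Linux host with lscpu available; on other platforms, read the equivalent figures from your hypervisor's hardware summary.

```python
# Sketch: derive sockets and physical cores per socket from lscpu (Linux host
# assumed) and suggest one Conferencing Node VM per socket, one vCPU per core.
import subprocess

def lscpu_topology():
    """Return (sockets, physical cores per socket) as reported by lscpu."""
    out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True).stdout
    values = {}
    for line in out.splitlines():
        key, _, value = line.partition(":")
        values[key.strip()] = value.strip()
    return int(values["Socket(s)"]), int(values["Core(s) per socket"])

sockets, cores_per_socket = lscpu_topology()
print(f"{sockets} socket(s) x {cores_per_socket} physical cores per socket")
print(f"Suggested layout: {sockets} Conferencing Node VM(s), "
      f"each with {cores_per_socket} vCPUs (one per physical core)")
```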
You can deploy smaller Conferencing Nodes over fewer cores/threads than are available in a single socket, but this will reduce capacity.
Deploying a Conferencing Node over more cores (or threads, when pinned) than a single socket provides will cause a loss of performance whenever remote memory is accessed. Take this into account when moving Conferencing Node VMs between host servers with different hardware configurations: if an existing VM is moved to a socket that has fewer cores/threads than the VM is configured to use, the VM will end up spanning two sockets, and therefore two NUMA nodes, impacting performance.
To prevent this occurring, ensure that either:
- you only deploy Conferencing Nodes on servers with a large number of cores per processor
- the number of vCPUs used by each Conferencing Node is no more than the number of cores/threads available on each NUMA node of even your smallest hosts (a pre-move sizing check is sketched below).
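Such a check can be scripted. The sketch below uses a hypothetical, hard-coded host inventory (host names and per-node core counts are made up for illustration) and simply sizes the VM against the smallest NUMA node it could ever be moved to.

```python
# Sketch of a pre-move sanity check against a mixed fleet.
HOSTS = {          # hypothetical inventory: host name -> cores per NUMA node
    "host-a": 20,
    "host-b": 16,
    "host-c": 12,
}

def max_safe_vcpus(hosts):
    """Largest vCPU count that stays within one NUMA node on every host."""
    return min(hosts.values())

planned_vcpus = 16
limit = max_safe_vcpus(HOSTS)
if planned_vcpus > limit:
    print(f"{planned_vcpus} vCPUs would span NUMA nodes on your smallest host "
          f"(limit {limit}): reduce the VM size or exclude that host")
else:
    print(f"{planned_vcpus} vCPUs fit within a single NUMA node on every host")
```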
It is possible to utilize the logical threads of a socket (hyperthreading) to deploy a Conferencing Node VM with two vCPUs per physical core (i.e. one per logical thread), in order to achieve up to 50% additional capacity.
However, if you do this you must ensure that all Conferencing Node VMs are pinned to their respective sockets within the hypervisor (also known as NUMA affinity). If you do not, the Conferencing Node VMs will end up spanning multiple NUMA nodes, resulting in a loss of performance.
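To see which logical threads belong to which socket, and therefore how many vCPUs a pinned Conferencing Node could use with hyperthreading, the sketch below groups logical CPUs by physical package ID. It again assumes a Linux host exposing sysfs; the pinning itself is always configured in the hypervisor, as covered by the platform-specific instructions referenced later in this section.

```python
# Sketch (Linux sysfs assumed): group logical CPUs (hyperthreads) by socket.
from collections import defaultdict
from pathlib import Path

threads_per_socket = defaultdict(list)
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    package = int((cpu / "topology" / "physical_package_id").read_text())
    threads_per_socket[package].append(int(cpu.name[3:]))

for socket_id, threads in sorted(threads_per_socket.items()):
    print(f"socket {socket_id}: {len(threads)} logical threads -> {sorted(threads)}")
```

For example, on a dual-socket host with 12 cores and 24 threads per socket, this would list 24 logical CPU IDs per socket, i.e. a pinned Conferencing Node could be sized at 24 vCPUs.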
Affinity does NOT guarantee or reserve resources; it simply forces a VM to use only the socket you define. Mixing Pexip Conferencing Node VMs that are configured with NUMA affinity with other VMs on the same server is therefore not recommended.
NUMA affinity is not practical in all data center use cases, as it ties a given VM to a specific CPU socket, but it is very useful for high-density Pexip deployments with dedicated capacity.
NUMA affinity for Pexip Conferencing Node VMs should only be used if the following conditions apply:
- The server/blade is used for Pexip Conferencing Node VMs only, and the server has only one Pexip Conferencing Node VM per CPU socket (for example, two VMs per server on a dual-socket system such as the E5-2600 generation).
- vMotion (VMware) or Live Migration (Hyper-V) is NOT used. (Using these may result in two Conferencing Node VMs being locked to the same socket, so that both contend for one processor while the other processor sits idle.)
- You know what you are doing, and you are happy to revert to the recommended settings if requested by Pexip support in order to investigate any issues that may result.
For instructions on how to achieve NUMA pinning (also known as NUMA affinity) for your particular hypervisor, see:
We are constantly optimizing our use of the host hardware and expect that some of this advice will change in later releases of our product. However, our current recommendations are:
- Prefer processors with a high core count.
- Prefer a smaller number of large Conferencing Nodes rather than a larger number of smaller Conferencing Nodes.
- Deploy one Conferencing Node per NUMA node (i.e. per socket).
- Configure one vCPU per physical core on that NUMA node (if not using hyperthreading and NUMA pinning), or one vCPU per logical thread (if using hyperthreading, with every VM pinned to a socket in the hypervisor).
- Populate memory equally across all NUMA nodes on a single host server.
- Do not over-commit resources on hardware hosts.
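The memory-population point lends itself to a quick host-side check. The sketch below (again assuming a Linux host exposing sysfs) compares MemTotal across NUMA nodes and warns on a skew of more than 5%, an arbitrary threshold chosen for illustration.

```python
# Sketch: warn if memory is not populated roughly equally across NUMA nodes.
from pathlib import Path

def node_mem_total_kb():
    """MemTotal per NUMA node, in kB, from /sys/devices/system/node/*/meminfo."""
    totals = {}
    for node in Path("/sys/devices/system/node").glob("node[0-9]*"):
        for line in (node / "meminfo").read_text().splitlines():
            if "MemTotal:" in line:
                totals[node.name] = int(line.split()[-2])
    return totals

totals = node_mem_total_kb()
print(totals)
smallest, largest = min(totals.values()), max(totals.values())
if largest - smallest > 0.05 * largest:   # 5% skew threshold, arbitrary
    print("Warning: memory is not evenly populated across NUMA nodes")
```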
See Resource allocation case study for examples.