
Achieving high density deployments with NUMA

There are many factors that can affect the performance of Virtual Machines (VMs) running on host hardware. One of these is how the VM interacts with NUMA.

This section provides an overview of NUMA and how it applies to Pexip Infinity Conferencing Nodes. It summarizes our recommendations and suggests best practices for maximizing performance.

About NUMA

NUMA stands for non-uniform memory access. It is an architecture that divides the computer into a number of nodes, each containing one or more processor cores and associated memory. A core can access its local memory faster than it can access the rest of the memory on that machine. In other words, it can access memory allocated to its own NUMA node faster than it can access memory allocated to another NUMA node on the same machine.
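
As an illustration (not part of the Pexip product), the following sketch assumes a Linux host and reads the kernel's sysfs topology files to show which CPU cores belong to which NUMA node; equivalent information is available from tools such as numactl --hardware.

    # Minimal sketch, assuming a Linux host: list each NUMA node and the
    # CPU cores that are local to it, using the kernel's sysfs files.
    from pathlib import Path

    def numa_topology():
        nodes = {}
        for node_dir in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
            # cpulist holds a range string such as "0-13,28-41"
            nodes[node_dir.name] = (node_dir / "cpulist").read_text().strip()
        return nodes

    for node, cpus in numa_topology().items():
        print(f"{node}: CPUs {cpus}")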

Conferencing Nodes and NUMA nodes

We strongly recommend that a Pexip Infinity Conferencing Node is deployed on a single NUMA node in order to avoid the loss of performance incurred when a core accesses memory outside its own node.

In practice, with modern servers, each socket represents a NUMA node. We therefore recommend that:

  • one Pexip Infinity Conferencing Node is deployed per socket of the host server, and
  • the number of vCPUs that the Conferencing Node is configured to use is the same as or less than the number of cores available in that socket.

You can deploy smaller Conferencing Nodes over fewer cores than are available in a single socket, but this will reduce capacity.
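
As a simple illustration of this sizing guidance (the figures below are hypothetical, not product limits), the sketch derives the recommended layout for a host from its socket and core counts:

    # Illustrative only: one Conferencing Node per socket, sized so that its
    # vCPUs never exceed the cores available in that socket / NUMA node.
    def recommended_layout(sockets: int, cores_per_socket: int) -> dict:
        return {
            "conferencing_nodes": sockets,        # one per socket (NUMA node)
            "max_vcpus_per_node": cores_per_socket,
        }

    # Example: a dual-socket host with 12 cores per socket supports two
    # Conferencing Nodes of up to 12 vCPUs each.
    print(recommended_layout(sockets=2, cores_per_socket=12))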

Deploying a Conferencing Node over more cores than a single socket provides will cause a loss of performance whenever remote memory is accessed. This must be taken into account when moving Conferencing Node VMs between host servers with different hardware configurations: if an existing VM is moved to a socket that contains fewer cores than the VM is configured to use, the VM will end up spanning two sockets, and therefore two NUMA nodes, which impacts performance.

To prevent this from occurring, ensure that either:

  • you deploy Conferencing Nodes only on servers with a large number of cores per processor, or
  • the number of cores used by each Conferencing Node is the same as (or less than) the number of cores available in each NUMA node of even your smallest host.
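
For example, a quick check along the following lines (the host inventory is hypothetical) shows whether a given Conferencing Node vCPU count fits inside a single NUMA node on every host it might be moved to:

    # Hedged sketch: verify that a VM's vCPU count fits within one NUMA node
    # on every candidate host before moving it. Core counts are hypothetical.
    def fits_single_numa_node(vm_vcpus: int, cores_per_numa_node: dict) -> dict:
        return {host: vm_vcpus <= cores for host, cores in cores_per_numa_node.items()}

    hosts = {"host-a": 16, "host-b": 10}      # cores per socket / NUMA node
    print(fits_single_numa_node(12, hosts))   # {'host-a': True, 'host-b': False}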

VMware and NUMA

As well as the physical restrictions discussed above, the hypervisor can also impose restrictions. By default, VMware exposes virtual NUMA nodes to VMs that are configured with more than 8 vCPUs. This threshold can be altered by setting numa.vcpu.min in the VM's configuration file.
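
For example, to expose virtual NUMA to VMs with fewer vCPUs you could add a line like the following to the VM's .vmx configuration file (the value shown is illustrative, not a recommendation):

    numa.vcpu.min = "6"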

Summary of deployment recommendations

We are constantly optimizing our use of the hardware and expect that some of this advice will change in later releases of our product. However, our current recommendations are:

  • Prefer processors with a high core count.
  • Prefer a smaller number of large Conferencing Nodes rather than a larger number of smaller Conferencing Nodes.
  • Deploy one Conferencing Node per NUMA node (i.e. per socket).
  • Configure one vCPU per core on that NUMA node.
  • Populate memory equally across all NUMA nodes on a single host server.
  • Do not over-commit resources on hardware hosts.