Hyper-V NUMA affinity and hyperthreading

This topic explains how to experiment with NUMA pinning and Hyper-Threading Technology for Pexip Infinity Conferencing Node VMs in order to achieve up to 50% additional capacity. To do this, you must be running Hyper-V as part of Windows Server Datacenter Edition.

If you are taking advantage of hyperthreading to deploy two vCPUs per physical core (i.e. one per logical thread), you must first enable NUMA affinity; if you don't, the Conferencing Node VM will end up spanning multiple NUMA nodes, resulting in a loss of performance.

Affinity does NOT guarantee or reserve resources; it simply forces a VM to use only the socket you define. Mixing Pexip Conferencing Node VMs that are configured with NUMA affinity with other VMs on the same server is therefore not recommended.

NUMA affinity is not practical in all data center use cases, as it ties a given VM to a specific CPU socket, but it is very useful for high-density Pexip deployments with dedicated capacity.

This information is aimed at administrators with a strong understanding of Hyper-V, who have very good control of their VM environment, and who understand the consequences of conducting these changes.

Please ensure you have read and implemented our recommendations in Achieving high density deployments with NUMA before you continue.

Prerequisites

NUMA affinity for Pexip Conferencing Node VMs should only be used if the following conditions apply:

  • You are using Hyper-V as part of a Windows Server Datacenter Edition (the Standard Edition does not have the appropriate configuration options).
  • Live Migration is NOT used. (Live migration could result in two Conferencing Node VMs being locked to the same socket, with both contending for one processor while the other processor sits idle.)
  • You fully understand what you are doing, and you are willing to revert to the standard settings if requested by Pexip support in order to investigate any potential issues that may result.

Example server without NUMA affinity - allows for more mobility of VMs

Example server with NUMA affinity - taking advantage of hyperthreading to gain 30-50% more capacity per server

Example hardware

In the example given below, we are using a SuperMicro SuperServer with dual Intel Xeon E5-2680 v3 processors, 64 GB RAM, and 2 x 1 TB hard drives.

On this server:

  • we deploy one Conferencing Node VM per processor/socket, so two Conferencing Nodes in total
  • we disable NUMA spanning, so each Conferencing Node VM runs on a single NUMA node/processor/socket
  • each processor has 12 physical cores
  • we use hyperthreading to deploy 2 vCPUs per physical core
  • this gives us 24 vCPUs / 24 threads per Conferencing Node
  • therefore we get 48 vCPUs / 48 threads in total on the server (the sketch below shows how to verify this topology).
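
If you want to confirm that the host presents the topology assumed in this example, the following PowerShell sketch (run in an elevated session on the Hyper-V host) lists the physical core/thread counts and the NUMA nodes that Hyper-V sees:

# Physical core and logical processor (thread) counts per CPU package.
# On the example hardware, each of the two CPUs should report 12 cores
# and 24 logical processors (i.e. hyperthreading enabled).
Get-CimInstance Win32_Processor |
    Select-Object DeviceID, NumberOfCores, NumberOfLogicalProcessors

# NUMA nodes as seen by Hyper-V - one per socket on this server.
Get-VMHostNumaNode | Format-Table NodeId, MemoryTotal, MemoryAvailable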

Disabling NUMA spanning on the server

Firstly, we must disable NUMA spanning on the server. To do this:

  1. From within Hyper-V Manager, right-click on the server and select Hyper-V Settings....

  2. From the Server section, select NUMA Spanning and deselect Allow virtual machines to span physical NUMA nodes. This ensures that each VM's processing remains on a single processor within the server.
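
The same change can also be made from PowerShell. This is a minimal sketch, assuming an elevated session on the Hyper-V host; the new setting takes effect once the Virtual Machine Management service has been restarted:

# Disable NUMA spanning host-wide.
Set-VMHost -NumaSpanningEnabled $false

# Restart the Virtual Machine Management service so the change takes
# effect (plan this for a maintenance window).
Restart-Service -Name vmms

# Verify the setting.
Get-VMHost | Select-Object NumaSpanningEnabled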

Disable NUMA spanning on the VM

Next, we need to ensure that the Conferencing Node VMs also have the correct settings and do not span multiple processors.

To do this:

  1. From within Hyper-V, select the Conferencing Node VM, and then select Settings > Hardware > Processor > NUMA.
  2. Confirm that only 1 NUMA node and 1 socket are in use by each Conferencing Node VM.
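
This check can also be scripted. A minimal sketch, assuming pexip-cnf01 and pexip-cnf02 are the names of the two example Conferencing Node VMs:

# Show the vCPU count and virtual NUMA limits for each Conferencing Node VM.
Get-VMProcessor -VMName pexip-cnf01, pexip-cnf02 |
    Format-Table VMName, Count, MaximumCountPerNumaNode, MaximumCountPerNumaSocket

# In this example, each VM should report Count = 24, with all 24 vCPUs
# permitted on a single NUMA node and a single socket.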

Starting the Virtual Machines

After the NUMA settings have been changed, you can start up each of the Conferencing Node VMs.
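
The VMs can also be started from PowerShell. A minimal sketch, again using the example VM names pexip-cnf01 and pexip-cnf02:

# Start both Conferencing Node VMs and confirm that they are running.
Start-VM -Name pexip-cnf01, pexip-cnf02
Get-VM -Name pexip-cnf01, pexip-cnf02 |
    Format-Table Name, State, CPUUsage, MemoryAssigned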

Viewing performance and checking for warnings

Every time a Conferencing Node is started up or rebooted, the Pexip Infinity Management Node samples the system to determine its capabilities. To view this information, go to the administrator log (History & Logs > Administrator log) and search for "sampling".

A successful run of the above example should return something like:

2015-04-05T18:25:40.390+00:00 softlayer-lon02-cnf02 2015-04-05 18:25:40,389 Level="INFO" Name="administrator.system" Message="Performance sampling finished" Detail="FULLHD=17 HD=33 SD=74 Audio=296"

An unsuccessful run, where Hyper-V has split the Conferencing Node over multiple NUMA nodes, would return the following warning in addition to the result of the performance sampling (note the significantly reduced capacity figures compared to the successful run above):

2015-04-06T17:42:17.084+00:00 softlayer-lon02-cnf02 2015-04-06 17:42:17,083 Level="WARNING" Name="administrator.system" Message="Multiple numa nodes detected during sampling" Detail="We strongly recommend that a Pexip Infinity Conferencing Node is deployed on a single NUMA node"

2015-04-06T17:42:17.087+00:00 softlayer-lon02-cnf02 2015-04-06 17:42:17,086 Level="INFO" Name="administrator.system" Message="Performance sampling finished" Detail="HD=21 SD=42 Audio=168"

Moving VMs

When moving Conferencing Node VMs between hosts, you must ensure that the new host has at least the same number of cores. You must also remember to disable NUMA spanning on the new host.
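
As a quick pre-move check, the following sketch verifies both conditions on the destination host (hv-host02 is a placeholder for the destination host's name):

# Core/thread counts on the destination host.
Get-CimInstance Win32_Processor -ComputerName hv-host02 |
    Select-Object DeviceID, NumberOfCores, NumberOfLogicalProcessors

# Confirm NUMA spanning is disabled on the destination host.
Get-VMHost -ComputerName hv-host02 |
    Select-Object Name, NumaSpanningEnabled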

BIOS settings

Ensure all BIOS settings pertaining to power saving are set to maximize performance rather than preserve energy. (Setting these to an energy-preserving or balanced mode may impact transcoding capacity, thus reducing the total number of HD calls that can be provided.) While these settings will use slightly more power, the alternative is to add another server in order to achieve the same increase in capacity, and in total that would consume more power than one server running in high-performance mode.

The actual settings will depend on the hardware vendor; see BIOS performance settings for some examples.