Detailed server hardware requirements

This topic describes the server hardware requirements for the Pexip Infinity platform.

Host server hardware requirements

The following lists the recommended hardware requirements for the Management Node and Conferencing Node †† (Proxying Edge Node and Transcoding Conferencing Node) host servers.

Server manufacturer
  • Management Node: Any
  • Conferencing Node: Any

Processor make (see also Performance considerations)
  • Management Node: Any
  • Conferencing Node: We recommend 2nd generation Intel Xeon Scalable Processors (Cascade Lake) Gold 62xx or 52xx. We also support Intel Xeon Scalable Processors (Skylake) Gold 61xx, and the E5-2600 v3/v4 (Haswell/Broadwell) architecture from 2014 or later. Xeon E5-2600 v1/v2 processors (Sandy Bridge/Ivy Bridge, from 2012 or later) also work. AMD processors that support the AVX and AVX2 instruction sets are also supported.

Processor instruction set
  • Management Node: Any
  • Conferencing Node: AVX2 (AVX or later is supported) ‡

Processor architecture
  • Management Node: 64-bit
  • Conferencing Node: 64-bit

Processor speed
  • Management Node: 2.0 GHz
  • Conferencing Node: 2.3 GHz or faster

No. of physical cores *
  • Management Node: 4 †
  • Conferencing Node: 10-20 cores per socket

Processor cache
  • Management Node: no minimum
  • Conferencing Node: 20 MB or greater

Total RAM *
  • Management Node: 4 GB †
  • Conferencing Node: 1 GB RAM per vCPU, so either:
      • 1 GB RAM per physical core (if deploying 1 vCPU per core), or
      • 2 GB RAM per physical core (if using hyperthreading and NUMA affinity to deploy 2 vCPUs per core).

RAM makeup
  • Management Node: Any
  • Conferencing Node: All memory channels must be populated with a DIMM; see Memory configuration below. (For example, the E5-24xx supports 3 DIMMs per socket, and the E5-26xx supports 4 DIMMs per socket.)

Hardware allocation
  • Both: The host server must not be over-committed in terms of either RAM or CPU. In other words, the Management Node and each Conferencing Node must have dedicated access to their own RAM and CPU cores.

Storage space required
  • Management Node: 100 GB SSD
  • Conferencing Node: 500 GB total per server (to allow for snapshots etc.), with a minimum of 50 GB per Conferencing Node
  Although Pexip Infinity will work with SAS drives, we strongly recommend SSDs for both the Management Node and Conferencing Nodes. General VM processes (such as snapshots and backups) and platform upgrades are faster with SSDs.

GPU
  • Both: No specific hardware cards or GPUs are required.

Network
  • Both: Gigabit Ethernet connectivity from the host server.

Operating System
  • Both: The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No operating system should be installed.

Hypervisor (see also Performance considerations)
  • Both:
      • VMware ESXi 5.x or 6.x
      • Microsoft Hyper-V 2012 or 2016
      • Xen 4.2 or later
      • KVM (Linux kernel 3.10.0 or later, and QEMU 1.5.0 or later)

* This does not include the processor and RAM requirements of the hypervisor.

† Sufficient for deployments of up to 30 Conferencing Nodes. For larger deployments, you will need to increase the amount of RAM and number of cores. For guidance on Management Node sizing, consult your Pexip authorized support representative or your Pexip Solution Architect.

‡ For VMware platforms, ESXi 6.x is required to make full use of the AVX2 instruction set. Note that AVX or later is required; older instruction sets are not supported.

†† The servers hosting Proxying Edge Nodes do not need as high a specification as those hosting Transcoding Conferencing Nodes, because proxying is less processor-intensive than transcoding. The minimum functional CPU instruction set for a proxying node is AVX, which was first available in the Sandy Bridge generation. You still need multiple proxying nodes for resilience and capacity. We recommend allocating 4 vCPU and 4 GB RAM (both of which must be dedicated resources) to each Proxying Edge Node, with a maximum of 8 vCPU and 8 GB RAM for large or busy deployments.
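
To show how the vCPU and RAM rules above combine, here is a minimal sizing sketch in Python. The function name and return shape are our own illustration; the rules themselves (1 vCPU per physical core, or 2 with hyperthreading and NUMA affinity, and 1 GB RAM per vCPU) come from the requirements above:

```python
def conferencing_node_size(physical_cores: int, hyperthreading: bool) -> dict:
    """Estimate the dedicated vCPU and RAM allocation for one
    Transcoding Conferencing Node, per the requirements above:
      - 1 vCPU per physical core, or 2 vCPUs per core when
        hyperthreading is used together with NUMA affinity
      - 1 GB RAM per vCPU
    """
    vcpus = physical_cores * (2 if hyperthreading else 1)
    ram_gb = vcpus  # 1 GB RAM per vCPU
    return {"vcpus": vcpus, "ram_gb": ram_gb}

# Example: a 10-core socket with hyperthreading and NUMA affinity
# yields 20 vCPUs and 20 GB of dedicated RAM for the node.
print(conferencing_node_size(10, hyperthreading=True))
```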

Capacity

The number of calls (or ports) that can be achieved per server in a Pexip Infinity deployment depends on several factors, including the specification of the particular server and the bandwidth of each call. For more information, see Capacity planning.

As a general indication of capacity:

  • When deployed on our recommended hardware (Intel Haswell, 10 cores, 2.3 GHz), Pexip Infinity can connect up to two High Definition 720p30 calls per CPU core. This is based on 1.1 GHz per HD call plus 20% headroom. Capacity for higher clock speeds can be calculated linearly from these figures, as shown in the sketch after this list.
  • The same recommended hardware can connect a higher number of lower-resolution calls per CPU core. For example, up to 20 audio-only AAC-LD calls at 64 kbps.
  • Servers that are older, have slower processors, or have fewer CPUs will have a lower overall capacity. Newer servers with faster processors will have a greater capacity. Use of NUMA affinity and hyperthreading will also significantly increase capacity.
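
The linear scaling mentioned above can be expressed as a short sketch. The reference figures (roughly two 720p30 calls per core at 2.3 GHz) come from this section; the function itself and its rounding are our own illustration, not a Pexip formula:

```python
def estimated_hd_calls(cores: int, clock_ghz: float) -> int:
    """Rough HD (720p30) call capacity, scaled linearly from the
    reference point above: ~2 calls per core at 2.3 GHz."""
    REF_CALLS_PER_CORE = 2
    REF_CLOCK_GHZ = 2.3
    return int(cores * REF_CALLS_PER_CORE * clock_ghz / REF_CLOCK_GHZ)

# Example: a dual-socket server with 2 x 10 physical cores at 2.3 GHz
# gives an estimate of roughly 40 concurrent HD calls.
print(estimated_hd_calls(cores=20, clock_ghz=2.3))
```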

Performance considerations

The types of processor and hypervisor used in your deployment will affect the levels of performance you can achieve. Some known performance considerations are described below.

Intel AVX2 processor instruction set

From software version 11, Pexip Infinity can make full use of the AVX2 instruction set provided by modern Intel processors. This increases the performance of video encoding and decoding. For VMware platforms, ESXi 6.x is required to enable this optimization.

The VP9 codec is also available for connections to Conferencing Nodes running on hardware with the AVX2 or later instruction set. VP9 uses around one third less bandwidth for the same resolution when compared to VP8. Note, however, that VP9 calls consume around 1.5 times the CPU resource (ports) on the Conferencing Node, as illustrated in the sketch below.
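
As a worked illustration of this bandwidth/CPU trade-off (the 1.5 Mbps VP8 call bandwidth is an assumed example, not a Pexip figure):

```python
# Ratios quoted above: VP9 uses ~1/3 less bandwidth than VP8 at the
# same resolution, but ~1.5x the CPU (port) cost on the node.
vp8_bandwidth_kbps = 1500   # assumed example VP8 call bandwidth
vp8_port_cost = 1.0         # normalized VP8 port cost

vp9_bandwidth_kbps = vp8_bandwidth_kbps * (2 / 3)  # ~1000 kbps
vp9_port_cost = vp8_port_cost * 1.5                # ~1.5 ports

print(f"VP9: ~{vp9_bandwidth_kbps:.0f} kbps at {vp9_port_cost:.1f}x port cost")
```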

AMD processors

In internal testing we have observed that using AMD processors reduces capacity (measured in ports per core) by around 40% when compared to an identically configured Intel platform. This is because current AMD processors do not execute advanced instruction sets at the same speed as Intel processors.

AMD processors older than 2012 may not perform adequately and are not recommended for use with the Pexip Infinity platform.

Memory configuration

Memory must be distributed across all of the memory channels (for example, 4 channels per socket on the Xeon E5-2600, and 6 channels per socket on the Xeon Gold 61xx series).

There must be an equal amount of memory per socket, and all sockets must have all memory channels populated (you do not need to populate all slots in a channel; one DIMM per channel is sufficient). Do not, for example, use two large DIMMs rather than four lower-capacity DIMMs: using only two DIMMs per socket on a four-channel system halves the memory bandwidth, since the memory interface is designed to read from all four DIMMs in parallel.

Example - dual socket, 4 channels

Xeon E5-2600 dual socket system:

  • Each socket has 4 channels
  • All 4 channels must be populated with a DIMM
  • Both sockets must have the same configuration

Therefore, for a dual-socket E5-2600 system you need 8 identical memory DIMMs.

Example - dual socket, 6 channels

Xeon Gold 61xx dual socket system:

  • Each socket has 6 channels
  • All 6 channels must be populated with a DIMM
  • Both sockets must have the same configuration

Therefore, for a dual-socket Gold 61xx system you need 12 identical memory DIMMs (one per channel), or 24 (two per channel).
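
The population rule can be expressed as a minimal layout check (the function and its arguments are our own illustration; note that it does not verify that the DIMMs themselves are identical, which is also required):

```python
def valid_dimm_layout(sockets: int, channels_per_socket: int, dimms: int) -> bool:
    """Check the rule above: every channel on every socket holds the
    same number of DIMMs (at least one), so the DIMM count must be a
    positive multiple of sockets * channels_per_socket."""
    channels = sockets * channels_per_socket
    return dimms >= channels and dimms % channels == 0

# Dual-socket E5-2600 (4 channels per socket): 8 DIMMs is valid.
print(valid_dimm_layout(sockets=2, channels_per_socket=4, dimms=8))    # True
# Dual-socket Gold 61xx (6 channels per socket): 12 or 24 DIMMs are valid.
print(valid_dimm_layout(sockets=2, channels_per_socket=6, dimms=24))   # True
# Two large DIMMs per socket on a 4-channel system: invalid (half bandwidth).
print(valid_dimm_layout(sockets=2, channels_per_socket=4, dimms=4))    # False
```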