Detailed server hardware requirements
This topic describes the server hardware requirements for the Pexip Infinity platform.
The following table lists the recommended hardware requirements for the Management Node and Conferencing Node (Proxying Edge Nodes and Transcoding Conferencing Nodes) host servers.
|  | Management Node | Conferencing Node (see also Performance considerations below) |
|---|---|---|
| Processor make |  | We recommend 2nd- or 3rd-generation Intel Xeon Scalable processors (Cascade Lake / Cooper Lake) Gold 62xx/63xx or 52xx/53xx. We also support Intel Xeon Scalable (Skylake) Gold 61xx processors, and the E5-2600 v3/v4 (Haswell / Broadwell) architecture from 2014 or later. Xeon E5-2600 v1/v2 processors (Sandy Bridge / Ivy Bridge, from 2012 or later) also work. AMD processors that support the AVX and AVX2 instruction sets are also supported. |
| Processor instruction set | Any | AVX2 or AVX512 (AVX is also supported) |
| Processor speed | 2.0 GHz | 2.3 GHz or faster |
| No. of physical cores |  | 10-20 cores per socket |
| Processor cache | No minimum | 20 MB or greater |
| Total RAM |  | 1 GB RAM per vCPU (see Memory configuration below) |
| RAM makeup | Any | All channels must be populated with a DIMM; see Memory configuration below. Intel Xeon Scalable series processors support 6 DIMMs per socket; older Xeon E5 series processors support 4 DIMMs per socket. |
| Hardware allocation | The host server must not be over-committed (also referred to as over-subscription or over-allocation) in terms of either RAM or CPU. In other words, the Management Node and Conferencing Nodes must each have dedicated access to their own RAM and CPU cores. | |
| Storage space required | 100 GB SSD |  |
| GPU | No specific hardware cards or GPUs are required. | |
| Network | Gigabit Ethernet connectivity from the host server. | |
| Operating System | The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed. | |
The number of calls (or ports) that can be achieved per server in a Pexip Infinity deployment depends on several factors, including the specification of the particular server and the bandwidth of each call. For more information, see Capacity planning.
As a general indication of capacity:
- When deployed on our recommended hardware (Intel Xeon Gold 6248, 20 cores, 2.5 GHz), Pexip Infinity can connect two High Definition 720p30 calls per CPU core. This is based on 1.1 GHz per HD call plus 20% headroom. Capacity for higher processor speeds can be calculated linearly from these figures.
- The same recommended hardware can connect a higher number of lower-resolution calls per CPU core. For example, up to 36 audio-only AAC-LD calls at 64 kbps.
Servers that are older, have slower processors, or have fewer CPU cores have a lower overall capacity, while newer servers with faster processors have a greater capacity. Use of NUMA affinity and hyperthreading will also significantly increase capacity.
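As an illustration only, the arithmetic behind these figures can be sketched as follows. The 1.1 GHz per HD call and 20% headroom values are the ones quoted above; the function name and linear-scaling assumption are ours, not a Pexip tool:

```python
# Back-of-envelope port estimate using the figures quoted above:
# ~1.1 GHz of CPU per HD 720p30 call, with 20% headroom kept free.
# Capacity is assumed to scale linearly with clock speed and cores.

GHZ_PER_HD_CALL = 1.1
HEADROOM = 0.20

def hd_calls_per_server(cores: int, clock_ghz: float) -> int:
    """Rough estimate of concurrent HD 720p30 calls for one server."""
    usable_ghz = cores * clock_ghz * (1 - HEADROOM)
    return int(usable_ghz / GHZ_PER_HD_CALL)

# Example: dual-socket Intel Xeon Gold 6248 (2 x 20 cores at 2.5 GHz).
# The text above rounds this to ~2 HD calls per core.
print(hd_calls_per_server(cores=40, clock_ghz=2.5))  # -> 72
```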
Performance considerations
The types of processors and hypervisors used in your deployment affect the levels of performance you can achieve. Some known performance considerations are described below.
Pexip Infinity can make full use of the AVX2 and AVX512 instruction sets provided by modern Intel processors. This increases the performance of video encoding and decoding. For VMware platforms, ESXi 6.x is required to enable this optimization.
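If you want to confirm which of these flags a given host exposes, one generic way on a Linux machine (a sketch, not a Pexip tool) is to read the CPU flags from /proc/cpuinfo:

```python
# Generic check of which AVX-family flags a Linux host's CPU exposes.
# Run on the host, or inside a test VM to see what the hypervisor
# passes through to guests.

def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                # "flags : fpu vme de ..." -> set of flag names
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for isa in ("avx", "avx2", "avx512f"):
    print(f"{isa}: {'present' if isa in flags else 'absent'}")
```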
The VP9 codec is also available for connections to Conferencing Nodes running on hardware with the AVX2 or later instruction set. VP9 uses around one third less bandwidth for the same resolution when compared to VP8. Note, however, that VP9 calls consume around 1.25 times the CPU resource (ports) on the Conferencing Node.
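Expressed as arithmetic, using only the approximate figures above (the constant and function names are hypothetical):

```python
# Approximate VP8 -> VP9 trade-off from the figures above:
# ~1/3 less bandwidth at the same resolution, ~1.25x the port cost.

VP9_BANDWIDTH_FACTOR = 2 / 3
VP9_PORT_FACTOR = 1.25

def as_vp9(vp8_kbps: float, vp8_ports: float) -> tuple[float, float]:
    """Bandwidth and port cost if the same call is negotiated as VP9."""
    return vp8_kbps * VP9_BANDWIDTH_FACTOR, vp8_ports * VP9_PORT_FACTOR

bw, ports = as_vp9(1500, 1)              # a 1500 kbps VP8 call
print(f"~{bw:.0f} kbps, {ports} ports")  # -> ~1000 kbps, 1.25 ports
```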
We have observed during internal testing that use of AMD processors results in a reduction of capacity (measured by ports per core) of around 40% when compared to an identically configured Intel platform. This is because current AMD processors do not execute advanced instruction sets at the same speed as Intel processors.
AMD processors released before 2012 may not perform sufficiently well and are not recommended for use with the Pexip Infinity platform.
Memory configuration
Memory must be distributed across the different memory channels (i.e. 6 channels per socket on the Xeon Scalable series, and 4 channels per socket on the Xeon E5-2600).
There must be an equal amount of memory per socket, and all sockets must have all memory channels populated (you do not need to populate all slots in a channel; one DIMM per channel is sufficient). On a four-channel processor, for example, do not use two large DIMMs rather than four lower-capacity DIMMs: populating only two channels per socket halves the memory bandwidth, because the memory interface is designed to read from all four DIMMs in parallel.
Example: Intel Xeon Scalable series dual-socket system:
- Each socket has 6 channels
- All 6 channels must be populated with a DIMM
- Both sockets must have the same configuration
Therefore, for a dual-socket Gold 61xx system you need 12 identical memory DIMMs.
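These population rules can be sanity-checked with a small sketch. The channel counts per socket are the ones given above; the function and dictionary names are illustrative, not a Pexip tool:

```python
# Check a proposed DIMM layout against the population rules above:
# every memory channel on every socket gets at least one DIMM, and
# all DIMMs are identical. Channels per socket: Xeon Scalable = 6,
# Xeon E5-2600 = 4.

CHANNELS_PER_SOCKET = {"xeon-scalable": 6, "xeon-e5-2600": 4}

def check_dimm_layout(cpu: str, sockets: int, dimms: int, dimm_gb: int) -> str:
    channels = CHANNELS_PER_SOCKET[cpu]
    minimum = sockets * channels  # one DIMM per channel is sufficient
    if dimms < minimum:
        return f"under-populated: need at least {minimum} identical DIMMs"
    if dimms % minimum != 0:
        return "unbalanced: populate every channel on every socket equally"
    return f"OK: {dimms} x {dimm_gb} GB = {dimms * dimm_gb} GB total"

# Dual-socket Xeon Scalable with one 16 GB DIMM per channel:
print(check_dimm_layout("xeon-scalable", sockets=2, dimms=12, dimm_gb=16))
# -> OK: 12 x 16 GB = 192 GB total
```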