
Detailed server hardware requirements

This topic describes the server hardware requirements for the Pexip Infinity platform.

Host server hardware requirements

The following table lists the recommended hardware requirements for the Management Node and Conferencing Node host servers.

                            Management Node   Conferencing Node

Server manufacturer         Any               Any

Processor make*             Any               Intel Xeon E5-2600 series (Haswell
                                              architecture) or similar Xeon
                                              processors from 2012 or later

Processor instruction set   Any               AVX2‡

Processor architecture      64-bit            64-bit

Processor speed             2.0 GHz           2.3 GHz or faster

No. of physical cores*      2†                10-12 cores per socket

Processor cache             No minimum        20 MB or greater
                                              (2.5 MB L3 cache per core)

Total RAM*                  4 GB†             1 GB per core

RAM makeup                  Any               All channels must be populated
                                              with a DIMM.

Hardware allocation         The host server must not be over-committed in terms
                            of either RAM or CPU. In other words, the Management
                            Node and Conferencing Nodes must each have dedicated
                            access to their own RAM and CPU cores.

Storage space required      100 GB            50 GB per Conferencing Node

GPU                         No specific hardware cards or GPUs are required.

Network                     Gigabit Ethernet connectivity from the host server.

Operating System            The Pexip Infinity VMs are delivered as VM images
                            (.ova etc.) to be run directly on the hypervisor.
                            No OS should be installed.

Hypervisor                  • VMware ESXi 4.1, 5.x or 6.0
                            • Microsoft Hyper-V 2012 or later
                            • Xen 4.2 or later
                            • KVM (Linux kernel 3.10.0 or later, and
                              QEMU 1.5.0 or later)

* This does not include the processor and RAM requirements of the hypervisor.

‡ For VMware platforms, ESXi 6 is required to make full use of the AVX2 instruction set. Note that AVX and SSE4.1 are also supported.

† Sufficient for deployments of up to 30 Conferencing Nodes. For larger deployments, you will need to increase the amount of RAM and number of cores. For more information about our server design guidelines, consult your Pexip support representative.
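
As a worked illustration of the figures in the table, the following sketch (a hypothetical helper, not a Pexip tool) totals the RAM and storage for a deployment, using 1 GB of RAM per core and 50 GB of storage per Conferencing Node, plus the Management Node's 4 GB and 100 GB. As footnote * notes, these figures exclude the hypervisor's own overhead.

    # Hypothetical sizing helper based on the table above (Python).
    # Figures exclude the hypervisor's own CPU/RAM overhead (see footnote *).

    def deployment_requirements(conferencing_nodes: int, cores_per_node: int = 10):
        """Return (total RAM in GB, total storage in GB) for the deployment."""
        mgmt_ram_gb, mgmt_storage_gb = 4, 100      # Management Node
        node_ram_gb = cores_per_node * 1           # 1 GB RAM per core
        node_storage_gb = 50                       # 50 GB per Conferencing Node
        total_ram = mgmt_ram_gb + conferencing_nodes * node_ram_gb
        total_storage = mgmt_storage_gb + conferencing_nodes * node_storage_gb
        return total_ram, total_storage

    # Example: three 10-core Conferencing Nodes
    print(deployment_requirements(3))   # (34, 250)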

Capacity

The number of calls (or ports) that can be achieved per server in a Pexip Infinity deployment depends on a number of factors, including the specification of the particular server and the bandwidth used by each call.

As a general indication of capacity:

  • When deployed on our recommended hardware (Intel Haswell, 10 cores, 2.3 GHz), Pexip Infinity can connect up to two High Definition 720p30 calls per CPU core. This is based on 1.1 GHz per HD call plus 20% headroom. Capacity for faster clock speeds can be calculated linearly from these figures (see the sketch after this list).
  • The same recommended hardware can connect a higher number of lower-resolution calls per CPU core. For example, up to 20 audio-only AAC-LD calls at 64 kbps.
  • Deployments on servers that are older, have slower processors, or have fewer CPUs will have a lower overall capacity.
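
The per-core figure above can be reproduced with simple arithmetic. The sketch below assumes the 20% headroom is reserved from each core's clock speed (the text does not spell out exactly how the headroom is applied) and divides the remaining cycles by the 1.1 GHz cost of an HD call; the function name is illustrative only.

    # Illustrative estimate of HD calls per core, assuming the 20% headroom
    # is reserved from the core's clock speed (one reading of the text above).

    def hd_calls_per_core(clock_ghz, ghz_per_hd_call=1.1, headroom=0.20):
        """Estimate concurrent HD 720p30 calls per physical core."""
        usable_ghz = clock_ghz * (1.0 - headroom)
        return usable_ghz / ghz_per_hd_call

    print(round(hd_calls_per_core(2.3), 2))   # 1.67 -> "up to two" calls per core
    print(round(hd_calls_per_core(3.0), 2))   # 2.18 -> scales linearly with clock speed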

Performance considerations

The types of processor and hypervisor used in your deployment will affect the level of performance you can achieve. Some known performance considerations are described below.

Intel AVX2 processor instruction set

As of software version 11, Pexip Infinity can make full use of the AVX2 instruction set provided by modern Intel processors. This increases the performance of video encoding and decoding. For VMware platforms, ESXi 6 is required to enable this optimization.
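
If you want to confirm that a host CPU exposes AVX2 (and the AVX and SSE4.1 sets mentioned in the footnotes above), on a Linux host such as a KVM hypervisor the feature flags can be read from /proc/cpuinfo. The snippet below is a minimal sketch of that check; it is not a Pexip utility.

    # Minimal check of CPU feature flags on a Linux host (reads /proc/cpuinfo).

    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for feature in ("avx2", "avx", "sse4_1"):
        print(feature, "yes" if feature in flags else "no")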

AMD processors

We have observed during internal testing that use of AMD processors results in a reduction of capacity (measured by ports per core) of around 40% when compared to an identically configured Intel platform. This is because current AMD processors do not execute advanced instruction sets at the same speed as Intel processors.

AMD processors released before 2012 may not perform adequately and are not recommended for use with the Pexip Infinity platform.

VMware ESXi 4.1

We have observed during internal testing that use of VMware's ESXi 4.1 hypervisor may result in a performance reduction of approximately 20% (as measured in the number of ports per physical core allocated to the Conferencing Node) when compared to VMware ESXi 5.x. This is due to slower pass-through execution of advanced processor instruction sets.

Memory configuration

Memory must be distributed across all of the memory channels (i.e. 4 channels per socket on the E5-2600).

There must be an equal amount of memory per socket, and all sockets must have all memory channels populated (you do not need to populate all slots in a channel; one DIMM per channel is sufficient). Do not, for example, use two large DIMMs rather than four lower-capacity DIMMs: populating only two channels per socket halves the memory bandwidth, because the memory interface is designed to read from all four DIMMs in parallel.

Example

E5-2600 dual socket system:

  • Each socket has 4 channels
  • All 4 channels must be populated with a DIMM
  • Both sockets must have the same configuration

Therefore, a dual-socket E5-2600 system requires 8 identical memory DIMMs.
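
Expressed as a calculation (an illustrative sketch, not a vendor tool): the number of DIMMs is sockets × channels per socket, and the capacity of each identical DIMM is the total RAM divided by that count.

    # Illustrative DIMM plan: one identical DIMM per channel on every socket.

    def dimm_plan(total_ram_gb, sockets=2, channels_per_socket=4):
        """Return (number of DIMMs, size of each DIMM in GB)."""
        dimms = sockets * channels_per_socket
        if total_ram_gb % dimms:
            raise ValueError("total RAM must split evenly across identical DIMMs")
        return dimms, total_ram_gb // dimms

    # Dual-socket E5-2600 with 64 GB total RAM: 8 identical 8 GB DIMMs
    print(dimm_plan(64))   # (8, 8)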