Server design recommendations

This section summarizes the terminology, recommended specifications and deployment guidelines for servers hosting the Pexip Infinity platform. These apply to both on-premises hardware and cloud deployments.


Terminology

The following terms are used in this guide in the context of a Pexip Infinity deployment.

Processor: The hardware within a computer that carries out the basic computing functions. It can consist of multiple cores.

Core: One single physical processing unit. An Intel Xeon Scalable processor typically has between 8 and 32 cores, although both larger and smaller variants are available.

Socket: The socket on the motherboard where one processor is installed.

RAM: The hardware that stores data which is accessed by the processor cores while executing programs. RAM is supplied in DIMMs (Dual In-line Memory Modules).

Channel: A memory channel uses a dedicated interface to the processor and can typically support up to 2 DIMMs. An Intel Xeon Scalable Series processor has 6-8 memory channels; older processors have fewer.

Virtual CPU (vCPU): The VM's understanding of how many CPU cores it requires. Each vCPU appears as a single CPU core to the guest operating system. When configuring a Conferencing Node, you are asked to enter the number of virtual CPUs to assign to it. We recommend no more than one virtual CPU per physical core, unless you are making use of CPUs that support Hyper-Threading; see NUMA affinity and hyperthreading for more details.

NUMA node: The combination of a processor (consisting of one or more cores) and its attached memory.
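The vCPU guidance above (at most one vCPU per physical core, or two where Hyper-Threading is available) can be sketched as a quick calculation. This is an illustrative helper, not part of the Pexip Infinity product:

```python
def max_vcpus(physical_cores: int, hyperthreading: bool) -> int:
    """Illustrative helper: upper bound on vCPUs to assign to a
    Conferencing Node, per the guidance above -- at most one vCPU per
    physical core, or two per core when Hyper-Threading is in use."""
    return physical_cores * 2 if hyperthreading else physical_cores

# A 24-core socket with Hyper-Threading can back a 48 vCPU node:
print(max_vcpus(24, True))   # 48
print(max_vcpus(24, False))  # 24
```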

Management Node

Recommended host server specifications

  • minimum 4 vCPUs* (most modern processors will suffice)
  • minimum 4 GB RAM* (minimum 1 GB RAM for each Management Node vCPU)
  • 100 GB SSD storage
  • The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed.

* Sufficient for typical deployments of up to 30 Conferencing Nodes. For deployments with more than 30 Conferencing Nodes, you will need to increase the number of cores and the amount of RAM on the Management Node. Please contact your Pexip authorized support representative or your Pexip Solution Architect for guidance on Management Node sizing specific to your environment.

Management Node performance considerations

There are a number of factors that can impact the performance of the Management Node, and these should be taken into consideration alongside the recommended specifications described above when determining the size of the Management Node in your deployment.

  • Individual cores can vary significantly in capability.
  • Large numbers of roughly simultaneous join and leave events place more load on the Management Node than the same number of events spread over a broader time range.
  • Different Pexip Infinity features will place different loads on the Management Node, so the overall load will be impacted by the features you have implemented and the frequency with which they are used.
  • Heavy or frequent use of the management API will significantly increase the load on the Management Node.
  • Multiple Live view sessions will increase the load on the Management Node.

Transcoding Conferencing Nodes

Below are our general recommendations for Transcoding Conferencing Node servers. For some specific examples, see Example Conferencing Node server configurations.

Recommended host server specifications

  • We recommend 3rd- or 4th-generation Intel Xeon Scalable Series processors (Ice Lake / Sapphire Rapids) Gold 63xx/64xx for Transcoding Conferencing Nodes.

    • Earlier Intel Xeon Scalable Series processors and Intel Xeon E5/E7 v3 and v4 series processors are also supported where newer hardware is not available. Machines based on these architectures work well for Management Node and Proxying Edge Node roles; we recommend prioritizing the newest hardware for Transcoding Conferencing Nodes.
    • Other x86-64 processors from Intel and AMD that support at least the AVX instruction set can be used but are not recommended. Some features are only available on underlying hardware that supports at least the AVX2 instruction set.
  • 2.6 GHz (or faster) base clock speed if using Hyper-Threading on 3rd-generation Intel Xeon Scalable Series (Ice Lake) processors or newer.

    • 2.8 GHz+ for older Intel Xeon processors where Hyper-Threading is in use
    • 2.3 GHz+ where Hyper-Threading is not in use
  • Minimum 4 vCPU per node
  • Maximum 48 vCPU per node, i.e. 24 cores if using Hyper-Threading

    • Higher core counts are possible on fast processors: up to 56 vCPU has been tested successfully
    • Slower (under 2.3 GHz) processors may require lower core counts
  • 1 GB RAM for each vCPU that is allocated to the Conferencing Node

    • Each memory channel should be populated:

      • Intel Xeon Scalable Series processors have 6-8 channels
      • Intel Xeon E5/E7 series processors have 4 channels
      • AMD 2nd- and 3rd-Gen EPYC (Rome/Milan) processors have 8 memory channels
      • AMD 4th-Gen EPYC (Genoa/Bergamo/Siena) processors have 12 memory channels
    • For most bus speeds, 16 GB is the smallest DIMM that is available, so it is often necessary to install more than 1 GB of RAM per vCPU to populate all the memory channels.
  • CPU and RAM must be dedicated
  • Populate all memory channels

    • For small (up to 12 vCPU) nodes, populating 4 memory channels is sufficient provided there is nothing else running on the socket
  • Storage:

    • 500 GB total per server (to allow for snapshots etc.), including:
    • 50 GB minimum per Conferencing Node
    • SSD recommended
    • RAID 1 mirrored storage (recommended)
  • Hypervisors: VMware ESXi 6.7, 7.0 and 8.0; KVM
  • The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed.
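The RAM guidance above (1 GB per vCPU, with every memory channel populated) can be sketched as a sizing calculation. The function name and the 16 GB DIMM default are illustrative assumptions, not product requirements:

```python
def node_ram_gb(vcpus: int, memory_channels: int, dimm_gb: int = 16) -> int:
    """Illustrative sizing helper for a Transcoding Conferencing Node
    host: at least 1 GB RAM per vCPU, but never less than one DIMM in
    every memory channel (a 1 DPC configuration)."""
    per_vcpu = vcpus * 1                 # 1 GB RAM per vCPU
    one_dpc = memory_channels * dimm_gb  # populate every channel
    return max(per_vcpu, one_dpc)

# A 48 vCPU node on an 8-channel Xeon Scalable socket with 16 GB DIMMs:
print(node_ram_gb(48, 8))  # 128 -- channel population dominates
```

As the example shows, populating all channels with the smallest common DIMM size often installs more than the 1 GB-per-vCPU minimum, which matches the note above.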

Proxying Edge Nodes

The servers hosting Proxying Edge Nodes do not require as high a specification as those servers hosting Transcoding Conferencing Nodes because they do not process any media.

Recommended host server specifications

  • Any x86-64 processor that supports the AVX instruction set or newer (AVX2, AVX-512)
  • 4 vCPU per node
  • 4 GB RAM per node

    • For large or busy systems 8 vCPU / 8 GB RAM can be used, but this must not be exceeded
    • We recommend scaling with multiple Proxying Edge Nodes for redundancy
  • CPU and RAM must be dedicated
  • Storage:

    • 500 GB total per server (to allow for snapshots etc.), including:
    • 50 GB minimum per Conferencing Node
    • SSD recommended
    • RAID 1 mirrored storage (recommended)
  • Hypervisors: VMware ESXi 6.7, 7.0 and 8.0; KVM
  • The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed.

Special considerations for AMD EPYC systems

We recommend using 3rd-generation Intel Xeon Scalable Series processors (Ice Lake) and newer. AMD EPYC processors are supported, but their per-core performance is notably lower than that of contemporary Intel parts. Where AMD EPYC processors are used:

  • We recommend using NPS4 and up to 32 vCPU VMs, i.e. four VMs per socket, with each VM pinned to one NUMA node.
  • For optimal performance, populate 8 DIMMs per socket for a 1 DIMM per Channel (DPC) configuration.
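The NPS4 carve-up above can be sketched as a layout calculation. This is an illustrative helper; the function name and parameters are assumptions for the example, not Pexip tooling:

```python
def epyc_nps4_layout(sockets: int, cores_per_socket: int, smt: bool = True):
    """Illustrative helper: with NPS4, each socket exposes four NUMA
    domains. Run one VM pinned to each domain, capped at 32 vCPU per VM
    as recommended above."""
    numa_domains = sockets * 4                 # NPS4: four domains/socket
    cores_per_domain = cores_per_socket // 4
    vcpus_per_vm = min(32, cores_per_domain * (2 if smt else 1))
    return numa_domains, vcpus_per_vm

# Dual-socket 64-core EPYC with SMT: eight VMs of 32 vCPU each
print(epyc_nps4_layout(2, 64))  # (8, 32)
```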

CPU microcode updates and performance

Microcode updates for Intel and AMD CPUs can have a significant negative impact on transcoding performance. Customers hosting Pexip Infinity in their own trusted environment may choose not to apply these updates; if Pexip Infinity is the only application running on the hardware, the security risk is minimal.

Low-level configuration


Hyper-Threading

  • Hyper-Threading (also referred to as Hyper-Threading Technology), if supported, should always be left enabled.
  • When Hyper-Threading is in use, we recommend that Conferencing Nodes are NUMA pinned to their sockets to avoid memory access bottlenecks.

Sub-NUMA clustering

Sub-NUMA Clustering (SNC) should be turned off unless you are using the ultra-high density deployment model or you have been specifically recommended otherwise by your Pexip authorized support representative.

BIOS performance settings

Ensure all BIOS settings pertaining to power saving are set to maximize performance rather than preserve energy. (Setting these to an energy-preserving or balanced mode may impact transcoding capacity, thus reducing the total number of HD calls that can be provided.) The actual settings depend on the hardware vendor; some examples are given below:

Typical HP settings

  • HP Power Profile: Maximum Performance
  • Power Regulator modes: HP Static High Performance mode
  • Energy/Performance Bias: Maximum Performance
  • Memory Power Savings Mode: Maximum Performance

Typical Dell settings

  • System Profile: Performance Optimized

Typical Cisco UCS B-Series settings

  • System BIOS Profile (Processor Configuration) - CPU Performance: Enterprise
  • System BIOS Profile (Processor Configuration) - Energy Performance: Performance
  • VMware configuration: Active Policy: Balanced


Network

  • Although the Conferencing Node server will normally not use more than 1-2 Mbps per video call, we recommend 1 Gbps network interface cards or switches to ensure free flow of traffic between Pexip Infinity nodes in the same datacenter. We do not recommend 100 Mbps NICs.
  • Redundancy: for hypervisors that support NIC Teaming (including VMware), you can configure two network interfaces for redundancy, connected to redundant switches (if this is available in your datacenter).
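The bandwidth headroom implied above can be illustrated with a quick estimate, using 2 Mbps per call as the worst case from the figure quoted. The function is an illustrative sketch, not Pexip tooling:

```python
def node_media_mbps(concurrent_calls: int, mbps_per_call: float = 2.0) -> float:
    """Illustrative estimate of one Conferencing Node's media bandwidth,
    using the 1-2 Mbps per video call figure (2 Mbps worst case)."""
    return concurrent_calls * mbps_per_call

# Even 100 concurrent calls need only ~200 Mbps, comfortably
# within a 1 Gbps link but already saturating a 100 Mbps NIC:
print(node_media_mbps(100))  # 200.0
```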


Disks

  • Although Pexip Infinity will work with SAS drives, we strongly recommend SSDs for both the Management Node and Conferencing Nodes. General VM processes (such as snapshots and backups) and platform upgrades will be faster with SSDs.
  • Management Node and Conferencing Node disks should be Thick Provisioned.
  • Pexip Infinity can absorb and recover relatively gracefully from short bursts of I/O latency but sustained latency will create problems.
  • The Management Node requires a minimum of 800 IOPS (but we recommend providing more wherever possible).
  • A Conferencing Node requires a minimum of 250 IOPS (but we recommend providing more wherever possible).
  • Deployment on SAN/NAS storage is possible, but local SSD is preferred. Interruption to disk access during software upgrades or machine startup can lead to failures.
  • Redundancy: when using our recommended RAID 1 mirroring for disk redundancy, remember to use a RAID controller supported by VMware or your preferred hypervisor. The RAID controller must have an enabled cache. Most vendors can advise which of the RAID controllers they provide are appropriate for your hypervisors.
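The per-role IOPS minimums above can be expressed as a simple check against a measured datastore figure. The role names and function are illustrative assumptions for this sketch:

```python
# Minimum sustained IOPS per node role, from the guidance above
# (role names here are illustrative, not Pexip configuration keys):
MIN_IOPS = {"management": 800, "conferencing": 250}

def storage_meets_minimum(node_role: str, measured_iops: int) -> bool:
    """Check a datastore's measured IOPS against the per-role minimum."""
    return measured_iops >= MIN_IOPS[node_role]

print(storage_meets_minimum("management", 1200))   # True
print(storage_meets_minimum("conferencing", 200))  # False
```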


Power supplies

  • Sufficient power to drive the CPUs. The server manufacturer will typically provide guidance on this.
  • Redundancy: Dual PSUs.