
Server design recommendations

This section summarizes the terminology, recommended specifications and deployment guidelines for servers hosting the Pexip Infinity platform.

Terminology

The table below provides descriptions for the terms used in this guide, in the context of a Pexip Infinity deployment.

Term Description
Processor The hardware within a computer that carries out the basic computing functions. It can consist of multiple cores.
Core A single physical processing unit within a processor. An Intel Xeon E5 processor typically has 8 cores (10, 12 or more in newer versions).
Socket The socket on the motherboard where one processor is installed.
RAM Also referred to as "memory modules". The hardware that stores data which is accessed by the processor cores while executing programs.
Virtual CPU (vCPU) The VM's understanding of how many CPU cores it requires. Each vCPU appears as a single CPU core to the guest operating system.

When configuring a Conferencing Node, you are asked to enter the number of virtual CPUs to assign to it. We recommend no more than one virtual CPU per physical core, unless you are making use of CPUs that support hyperthreading — see NUMA affinity and hyperthreading for more details.

NUMA node The combination of a processor (consisting of one or more cores) and its associated memory.

Management Node

Recommended host server specifications

  • 4 cores* (most modern processors will suffice)
  • 4 GB RAM*
  • 100 GB SSD storage
  • The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed.

* Sufficient for deployments of up to 30 Conferencing Nodes. For larger deployments, you will need to increase the amount of RAM and number of cores. For smaller test and development deployments, 2 cores will suffice. For guidance on Management Node sizing, consult your Pexip authorized support representative or your Pexip Solution Architect.

Conferencing Node

Below are our general recommendations for Conferencing Node (Proxying Edge Nodes and Transcoding Conferencing Nodes) servers. For some specific examples, see Example Conferencing Node server configurations.

Recommended host server specifications

  • The AVX instruction set is required. We recommend Intel Xeon Scalable (Skylake) Gold 61xx-series processors, or E5-2600 v3/v4 (Haswell/Broadwell) processors from 2014 or later. Xeon E5-2600 v1/v2 (Sandy Bridge/Ivy Bridge) processors from 2012 or later also work.
  • 2.3 GHz (or faster) clock speed
  • 10-12 physical cores per socket
  • 1 GB RAM for each vCPU that is allocated to the Conferencing Node
  • 4 memory modules per processor socket, with all memory channels populated
  • Storage: 50 GB minimum per Conferencing Node; 500 GB total per server (to allow for snapshots etc.)
  • RAID 1 mirrored storage
  • Hypervisors: VMware ESXi 5.5 or 6.0; KVM; Xen; Microsoft Hyper-V 2012 or later
  • The Pexip Infinity VMs are delivered as VM images (.ova etc.) to be run directly on the hypervisor. No OS should be installed.
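As a rough illustration of the recommendations above, the sketch below checks a proposed host server specification against these guidelines. The helper name and warning strings are hypothetical, not part of Pexip Infinity:

```python
# Hypothetical helper: sanity-check a Conferencing Node host server spec
# against the recommendations above. Not part of Pexip Infinity.

def check_host_spec(cores_per_socket, clock_ghz, ram_gb, vcpus, disk_gb):
    """Return a list of warnings; an empty list means the spec looks OK."""
    warnings = []
    if not 10 <= cores_per_socket <= 12:
        warnings.append("prefer 10-12 physical cores per socket")
    if clock_ghz < 2.3:
        warnings.append("prefer a 2.3 GHz or faster clock speed")
    if ram_gb < vcpus:  # 1 GB RAM per vCPU allocated to the node
        warnings.append("allocate at least 1 GB RAM per vCPU")
    if disk_gb < 50:
        warnings.append("50 GB minimum storage per Conferencing Node")
    return warnings

# A 10-core, 2.6 GHz socket with 20 GB RAM for a 20-vCPU node (NUMA affinity):
print(check_host_spec(10, 2.6, 20, 20, 500))  # -> []
```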

Transcoding Conferencing Nodes versus Proxying Edge Nodes

The specifications and guidelines shown below apply to Transcoding Conferencing Nodes.

The servers hosting Proxying Edge Nodes do not need to be as highly specified as those hosting Transcoding Conferencing Nodes, because proxying nodes are not as processor-intensive as transcoding nodes. However, you still need multiple proxying nodes for resilience and capacity.

We recommend allocating 4 vCPU and 4 GB RAM (both of which must be dedicated resources) to each Proxying Edge Node, with a maximum of 8 vCPU and 8 GB RAM for large deployments.
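To make that allocation rule concrete, here is a minimal sketch (the helper name is hypothetical) that returns the recommended vCPU and RAM figures for a Proxying Edge Node:

```python
# Hypothetical sizing helper for Proxying Edge Nodes: 4 vCPU / 4 GB RAM
# recommended, up to 8 vCPU / 8 GB RAM for large deployments.

def proxying_node_allocation(large_deployment=False):
    vcpus = 8 if large_deployment else 4
    ram_gb = vcpus  # RAM in GB matches the vCPU count
    return vcpus, ram_gb

print(proxying_node_allocation())      # (4, 4)
print(proxying_node_allocation(True))  # (8, 8)
```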

General deployment recommendations for Transcoding Conferencing Nodes

Cores, CPU and RAM

  • Prefer processors with high core count (10 cores or more per CPU).
  • Prefer processors with a high clock speed (2.3 GHz and higher).
  • Prefer a smaller number of large Conferencing Nodes (e.g. 4 x 10-core nodes), rather than a large number of small Conferencing Nodes (e.g. 10 x 4-core nodes).
  • A single Conferencing Node must not be assigned more vCPUs than the number of physical cores on each processor socket. (An exception to this rule is when NUMA affinity is enabled.)
  • For each physical CPU core (or logical thread, if employing NUMA affinity):
    • configure 1 vCPU
    • assign at least 1 GB RAM

    For example, on an E5-2680 v2 CPU with 10 physical cores (i.e. 20 logical threads), either

    • assign 10 vCPU (one per physical core) and 10 GB of RAM, or
    • enable NUMA affinity, and assign 20 vCPU (one per logical thread) and 20 GB of RAM
  • A Conferencing Node must have 4 vCPU and 4 GB RAM as an absolute minimum.
  • Do not over-commit either RAM or CPU resources on hardware hosts. In other words, the Conferencing Node and Management Node each must have dedicated access to their own RAM and CPU cores. Pexip Conferencing Nodes use real-time media, which needs dedicated capacity.
  • We recommend 8 memory modules for a dual E5-2600 configuration, as each CPU has 4 memory channels. 8 x 4 GB should be sufficient for such deployments as we recommend 1 GB RAM per vCPU. Some vendors do not provide modules smaller than 8 GB, so in that case we suggest 8 x 8 GB. (This is more than required, but it could be useful if the server is repurposed in the future.)
  • Populate memory equally across all NUMA nodes/sockets on a single host server. All memory channels (typically 4 per CPU for E5-2600; 6 per CPU for Gold 61xx) must be populated.
  • For high performance clusters dedicated to Pexip Infinity, you can achieve 30-50% additional performance by using NUMA affinity and taking advantage of hyperthreading (for CPUs supporting this). For more information, see NUMA affinity and hyperthreading.
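The per-core sizing rule above can be sketched as a small calculation. This is a hypothetical helper, assuming 2 logical threads per physical core when hyperthreading and NUMA affinity are in use:

```python
# Hypothetical sizing helper for a Transcoding Conferencing Node.
# Assumes 2 logical threads per physical core with hyperthreading enabled.

def transcoding_node_sizing(physical_cores, numa_affinity=False):
    """Return (vcpus, ram_gb): 1 vCPU and 1 GB RAM per physical core,
    or per logical thread if NUMA affinity is enabled; 4/4 minimum."""
    vcpus = max(physical_cores * (2 if numa_affinity else 1), 4)
    ram_gb = vcpus  # at least 1 GB RAM per vCPU
    return vcpus, ram_gb

# E5-2680 v2 example: 10 physical cores (20 logical threads) per CPU
print(transcoding_node_sizing(10))                      # (10, 10)
print(transcoding_node_sizing(10, numa_affinity=True))  # (20, 20)
```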

Hyperthreading

  • Hyperthreading (also referred to as Hyper-Threading Technology), if supported, should be left enabled (this is the default setting).

BIOS performance settings

  • Ensure all BIOS settings pertaining to power saving are set to maximize performance rather than preserve energy. (Setting these to an energy-preserving or balanced mode may impact transcoding capacity, thus reducing the total number of HD calls that can be provided.) The actual settings depend on the hardware vendor; some examples are given below:

    Typical HP settings

    • HP Power Profile: Maximum Performance
    • Power Regulator modes: HP Static High Performance mode
    • Energy/Performance Bias: Maximum Performance
    • Memory Power Savings Mode: Maximum Performance

    Typical Dell settings

    • System Profile: Performance Optimized

    Typical Cisco UCS B-Series settings

    • System BIOS Profile (Processor Configuration) - CPU Performance: Enterprise
    • System BIOS Profile (Processor Configuration) - Energy Performance: Performance
    • VMware configuration: Active Policy: Balanced

Network

  • Although a Conferencing Node server will normally not use more than 1-2 Mbps per video call, we recommend 1 Gbps network interface cards and switches to ensure the free flow of traffic between Pexip Infinity nodes in the same datacenter. We do not recommend 100 Mbps NICs.
  • Redundancy: for hypervisors that support NIC Teaming (including VMware), you can configure two network interfaces for redundancy, connected to redundant switches (if this is available in your datacenter).

Disk

  • Pexip Infinity will work with higher speed SAS or Near-Line SAS drives but we recommend SSD drives for the Management Node. For Conferencing Nodes, SSDs are not a requirement, but general VM processes such as snapshots and backups will be faster with SSDs.
  • Pexip Infinity can absorb and recover relatively gracefully from short bursts of I/O latency, but sustained latency will create problems.
  • Deployment on SAN/NAS storage should in most cases work well. Disk access is only required by the operating system and logs, so ordinary disk performance is sufficient.
  • Redundancy: For RAID 1 mirroring for disk redundancy, remember to use a RAID controller supported by VMware or your preferred hypervisor. Most vendors can advise which of the RAID controllers they provide are appropriate for your hypervisors.

Power

  • Sufficient power to drive the CPUs. The server manufacturer will typically provide guidance on this.
  • Redundancy: Dual PSUs.