Advanced VMware ESXi administration

Simple deployments of the Pexip Infinity platform should not require any special VMware knowledge or configuration beyond that described in Configuring VMware for Pexip Infinity.

This section describes some important requirements for advanced VMware ESXi administration when used with Pexip Infinity. It assumes that you are already familiar with VMware. For more information on VMware ESXi in general, see http://www.vmware.com/products/esxi-and-esx.html.

If an ESXi host is being managed by vCenter Server, all administration must be performed via vCenter Server. Do not log in directly to the ESXi host; configuration changes made in this way may be lost. To ensure that ESXi hosts being managed by vCenter Server are accessible via vCenter Server only and are not directly accessible, you should put them in Lockdown mode. Lockdown mode forces all operations to be performed through vCenter Server.
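
If you administer hosts programmatically, the following is a minimal sketch using VMware's pyVmomi Python SDK that places a named ESXi host into Lockdown mode through vCenter Server; the vCenter address, credentials and host name are placeholders for your own environment.

    # Sketch: enable Lockdown mode on an ESXi host via vCenter using pyVmomi.
    # All names and credentials below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()   # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********",
                      sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)   # all hosts managed by this vCenter
        for host in view.view:
            if host.name == "esxi01.example.com":
                host.EnterLockdownMode()   # forces all further operations through vCenter Server
        view.Destroy()
    finally:
        Disconnect(si)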

This topic covers:

  • Supported vSphere versions
  • Supported vSphere editions
  • Host server requirements
  • General recommendations
  • Impact on virtual environment
  • Upgrading VM hardware versions
  • vMotion
  • Enhanced vMotion Compatibility (EVC)

Supported vSphere versions

Version 35.1 of the Pexip Infinity platform supports VMware vSphere ESXi 6.7, 7.0 and 8.0.

Standalone ESXi hosts are not supported.

Supported vSphere editions

The Pexip Infinity platform will run on the free edition of vSphere Hypervisor. However, this edition has a number of limitations (limited support from VMware, and no access to vCenter or vMotion). For this reason we do not recommend its use except in smaller deployments or in test and demo environments.

The minimum edition of VMware that we recommend is the vSphere Standard edition. This does not have the limitations of the free edition. If you do not already use VMware in your enterprise, the vSphere Essentials Kit is a simple way to get started and will provide you with Standard edition licenses for 3 servers (with 2 CPUs each) plus a vCenter license.

The Enterprise Plus edition includes additional features relevant to the Pexip Infinity platform that could benefit larger deployments. These include Storage DRS and Distributed Switch.

For a comparison of the VMware editions, see http://www.vmware.com/products/vsphere.html#compare.

Host server requirements

The recommended hardware requirements for the Management Node and Conferencing Node host servers are described in Server design recommendations. In addition to this:

  • GPU: host servers do not require any specific hardware cards or GPUs.
  • Disk: either direct attached storage or shared storage can be used. The primary disk activity will be logging.
  • Multitenancy: this version of Pexip Infinity requires a dedicated VMware host for supported deployments. Multitenancy with other applications may be supported in the future, and is possible in a test environment as long as other applications on the same host server are not consuming significant CPU and Pexip Infinity can be given reserved memory.

General recommendations

Pexip Infinity can take advantage of advanced CPU features, so for optimal performance we recommend that you run Conferencing Nodes on your newer host servers.

CPUs with a large cache (15–30 MB+) are recommended over CPUs with a smaller cache (4–10 MB), especially when running 10 or more participants per conference.

To protect the overall quality of the conference, we highly recommend that any hardware resources allocated to a Conferencing Node are reserved specifically for its own use.
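
As an illustration of how such a reservation could be applied programmatically, the sketch below uses pyVmomi to lock a Conferencing Node VM's CPU and memory allocation; the VM object and reservation figures are placeholders rather than Pexip sizing guidance.

    # Sketch: reserve CPU and memory for a Conferencing Node VM using pyVmomi.
    # The reservation figures are illustrative placeholders.
    from pyVmomi import vim

    def reserve_resources(vm, cpu_mhz, mem_mb):
        """Reconfigure the VM so its CPU and memory reservations are guaranteed."""
        spec = vim.vm.ConfigSpec()
        spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=cpu_mhz)
        spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=mem_mb)
        spec.memoryReservationLockedToMax = True   # reserve all configured memory
        return vm.ReconfigVM_Task(spec)            # returns a vCenter task to monitor

    # Example (placeholder figures): task = reserve_resources(conf_node_vm, cpu_mhz=20000, mem_mb=8192)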

Impact on virtual environment

CPU

The CPU is the most critical component in a successful deployment of the Pexip Infinity platform.

Newer Intel (or AMD) CPUs typically provide more features that Pexip Infinity can utilize to give better performance. We therefore recommend that you deploy Pexip Infinity on newer hardware, and move applications that are less time-critical (for example, mail servers, web servers, and file servers) to your older hardware.

Memory

The memory specified for the Pexip Infinity deployment should not be shared with other processes, because Pexip Infinity accesses memory at high speed when active. However, the amount of memory needed is quite small compared to the CPU workload, and increasing the memory beyond the recommended amount will not significantly increase performance.

Storage

Apart from storing the Pexip Infinity application itself, disk activity during operation is mainly logging. Most real-time activity happens in memory, so there is no need to deploy your fastest or newest SSD drives for this application; the standard disk performance expected of most servers is sufficient for good logging performance. However, although Pexip Infinity will work with SAS drives, we strongly recommend SSDs for both the Management Node and Conferencing Nodes: general VM operations (such as snapshots and backups) and platform upgrades are faster with SSDs.

Network

Gigabit Ethernet connectivity from the host server is strongly recommended, because Conferencing Nodes send and receive real-time audio and video data, and any network bottlenecks should be avoided. The amount of traffic to expect can be calculated from the capacity of the servers; typically, a 100 Mbps network link can easily be saturated when a large number of calls go through a given Conferencing Node. In general, you can expect 1–3 Mbps per call connection, depending on the call control setup.
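
As a rough worked example of that calculation (plain Python; the per-call figure and call counts are assumptions based on the range above), the expected load on a single Conferencing Node can be estimated as follows:

    # Rough bandwidth estimate for one Conferencing Node, based on the
    # 1-3 Mbps per call connection figure above. Call counts are illustrative.
    def node_bandwidth_mbps(calls, mbps_per_call=3.0):
        """Worst-case bandwidth in each direction, in Mbps."""
        return calls * mbps_per_call

    for calls in (10, 40, 100):
        print(f"{calls} calls -> ~{node_bandwidth_mbps(calls):.0f} Mbps each way")
    # 40 calls at 3 Mbps is already ~120 Mbps, enough to saturate a 100 Mbps link.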

Traffic shaping

Do not apply traffic shaping that could limit the flow of Conferencing Node traffic without considerable planning. If bandwidth usage to or from a Conferencing Node is too high, address this in the call control layer; shaping at the Conferencing Node level will most likely degrade the experience for participants.

NIC teaming

VMware NIC teaming is a way to group several network interface cards (NICs) so that they behave as one logical NIC. When using NIC teaming in ESXi, we recommend load balancing based on the originating virtual port ID because of its low complexity (it does not steal CPU cycles from the host). You can also load balance based on source MAC hash; however, we do not recommend IP hash because of its high CPU overhead when a large number of media packets are involved.
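
To check which load-balancing policy a host's standard vSwitches are currently using, a small pyVmomi sketch (the host object is obtained as in the Lockdown mode example above) could look like this; the strings in the comment are the identifiers ESXi uses for these policies.

    # Sketch: list the NIC teaming policy of each standard vSwitch on a host.
    # ESXi identifies the policies as "loadbalance_srcid" (originating virtual
    # port ID), "loadbalance_srcmac" (source MAC hash) and "loadbalance_ip" (IP hash).
    def print_teaming_policies(host):
        for vswitch in host.config.network.vswitch:
            policy = vswitch.spec.policy.nicTeaming.policy
            print(f"{host.name} {vswitch.name}: {policy}")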

Upgrading VM hardware versions

The virtual hardware version is fixed at the time a Management Node or Conferencing Node VM is first deployed. If you subsequently upgrade your Pexip Infinity deployment, you may need to manually upgrade the VM hardware version to ensure you can make use of support for the most recent CPU instruction sets.

Pexip Infinity supports hardware versions 11 and later. Note that:

  • AVX2 instruction set requires ESXi 6.0+ and VM hardware version 11+
  • AVX512 instruction set requires ESXi 6.7+ and VM hardware version 14+

Management Node: we recommend upgrading the hardware version of the Management Node VM to match the ESXi host version that the Management Node is running on.

Conferencing Nodes: we recommend upgrading the hardware version of the Conferencing Node VMs to at least match the ESXi host version that you are running in your environment.

See https://kb.vmware.com/s/article/1010675 for ESXi version to virtual hardware version compatibility information, and instructions on upgrading a VM's hardware version (vmversion).
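
As an illustration of what a scripted upgrade can look like (a pyVmomi sketch; "vmx-14" is an example target suitable for ESXi 6.7, so choose the version that matches your hosts), note that the VM must be powered off before the upgrade task will succeed:

    # Sketch: check and upgrade a VM's virtual hardware version using pyVmomi.
    # "vmx-14" (hardware version 14, AVX512-capable on ESXi 6.7+) is an example target.
    def upgrade_hardware_version(vm, target="vmx-14"):
        print(f"{vm.name} is currently at hardware version {vm.config.version}")
        if vm.config.version != target:
            # The VM must be powered off first; for Conferencing Nodes, enable
            # maintenance mode and wait for conferences to finish before shutdown.
            return vm.UpgradeVM_Task(version=target)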

vMotion

Conferencing Nodes (and the Management Node) can be moved across host servers using vMotion.

You must put the Conferencing Node into maintenance mode and wait until all conferences on that node have finished before migrating it to another host server. See Taking a Conferencing Node out of service for more information.

For more information on vMotion in general, see http://www.vmware.com/products/vsphere/features/vmotion.html.
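
If you trigger the migration through the API rather than the vSphere client, a pyVmomi sketch looks roughly like the following; the target host is a placeholder, and the Conferencing Node should already be in maintenance mode with no active conferences.

    # Sketch: vMotion a drained Conferencing Node VM to another ESXi host.
    # The target host object is a placeholder obtained from your own inventory.
    from pyVmomi import vim

    def migrate_node(vm, target_host):
        return vm.MigrateVM_Task(
            host=target_host,   # destination ESXi host
            priority=vim.VirtualMachine.MovePriority.defaultPriority)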

Enhanced vMotion Compatibility (EVC)

When EVC (Enhanced vMotion Compatibility) is enabled across a cluster of host servers, all servers in that cluster will emulate the lowest common denominator CPU. This allows you to move VMs between any servers in the cluster without any problems, but it means that if any servers in that cluster have newer-generation CPUs, their advanced features cannot be used.

Because Conferencing Nodes use the advanced features of newer-generation CPUs (for example, AVX and later instruction sets on newer Intel CPUs), we recommend that you disable EVC (Enhanced vMotion Compatibility) for any clusters hosting Conferencing Nodes where the cluster includes a mix of new and old CPUs.

If you enable EVC on mixed-CPU clusters, the Pexip Infinity platform will run more slowly because it will cause the Conferencing Nodes to assume they are running on older hardware.

If you enable EVC, you must select the Sandy Bridge-compatible EVC mode as a minimum. This is the lowest EVC mode that supports the AVX instruction set, which is the minimum required to run the Pexip Infinity platform.

When enabling EVC or lowering the EVC mode, you should first shut down any currently running VMs with a higher EVC mode than the one you intend to enable.

When disabling EVC or raising the EVC mode, any currently running VMs will not have access to the new level until they have been shut down and restarted.
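
To check whether a cluster currently has an EVC mode applied, a pyVmomi sketch such as the following can be used (the connection is established as in the Lockdown mode example; an empty value means EVC is disabled for that cluster):

    # Sketch: report the current EVC mode of each cluster managed by vCenter.
    from pyVmomi import vim

    def print_evc_modes(content):
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            mode = cluster.summary.currentEVCModeKey or "EVC disabled"
            print(f"{cluster.name}: {mode}")   # e.g. "intel-sandybridge"
        view.Destroy()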

For instructions on disabling EVC, see Disabling EVC.

For more information on EVC in general, see https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.vcenterhost.doc/GUID-9F444D9B-44A0-4967-8C07-693C6B40278A.html.