
Advanced VMware ESXi administration

Simple deployments of the Pexip Infinity platform should not require any special VMware knowledge or configuration beyond that described in Configuring VMware for Pexip Infinity.

This section describes some important requirements for advanced VMware ESXi administration when used with Pexip Infinity. It assumes that you are already familiar with VMware. For more information on VMware ESXi in general, see http://www.vmware.com/products/vsphere/esxi-and-esx/overview.html.

If an ESXi host is being managed by vCenter Server, all administration must be performed via vCenter Server. Do not log in directly to the ESXi host; configuration changes made in this way may be lost. To ensure that ESXi hosts being managed by vCenter Server are accessible via vCenter Server only and are not directly accessible, you should put them in Lockdown mode. Lockdown mode forces all operations to be performed through vCenter Server.
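As an illustration only (not part of the Pexip Infinity product), the following Python sketch shows one way to place an ESXi host into Lockdown mode programmatically using the open-source pyVmomi library. The hostnames and credentials are placeholders, and you should verify the behaviour against your own vSphere version before relying on it.

    # Sketch: enable Lockdown mode on an ESXi host via vCenter Server (pyVmomi).
    # All hostnames and credentials below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # use proper certificate validation in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="********", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            if host.name == "esxi01.example.com":
                host.EnterLockdownMode()      # all further access must go via vCenter Server
        view.Destroy()
    finally:
        Disconnect(si)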

Supported vSphere versions

Version 12 of the Pexip Infinity platform supports VMware vSphere ESXi 5.x and 6.0; we recommend ESXi 5.5 or 6.0. Support for ESXi 4.1 is deprecated: if you have upgraded from a version prior to v12, you can still deploy Conferencing Nodes to servers running ESXi 4.1; however, if you have a new v12 deployment and attempt to deploy a Conferencing Node to a server running ESXi 4.1, that node will go straight into maintenance mode.

Supported vSphere editions

The Pexip Infinity platform will run on the free edition of vSphere Hypervisor. However, this edition has a number of limitations (limited support from VMware, no access to vCenter or vMotion, and no access to the VMware API). In particular, the lack of access to the VMware API means that all Conferencing Nodes must be deployed manually (see Deployment types). For this reason we do not recommend its use except in smaller deployments, or in test or demo environments.

The minimum edition of VMware that we recommend is the vSphere Standard edition. This does not have the limitations of the free edition. If you do not already use VMware in your enterprise, the vSphere Essentials Kit is a simple way to get started and will provide you with Standard edition licenses for 3 servers (with 2 CPUs each) plus a vCenter license.

The Enterprise Plus edition includes additional features relevant to the Pexip Infinity platform that may benefit larger deployments, such as Storage DRS and Distributed Switch.

For a comparison of the VMware editions, see http://www.vmware.com/products/datacenter-virtualization/vsphere/compare-editions.html.

Management Node network requirements

When deploying Conferencing Nodes, the Management Node connects to the vCenter Server (or the ESXi host directly) on port 443 (https).

This communication port must be open when creating new Conferencing Nodes.
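If you want a quick way to confirm reachability before deploying, the short Python sketch below (our illustration, not a Pexip tool) performs a TCP connection test to port 443; the hostname is a placeholder and should be run from the Management Node's network.

    # Sketch: verify that vCenter Server (or an ESXi host) is reachable on TCP 443.
    # "vcenter.example.com" is a placeholder.
    import socket

    def https_reachable(host, port=443, timeout=5):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(https_reachable("vcenter.example.com"))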

Permissions in vCenter Server (or on ESXi hosts)

A valid username and password for the vCenter Server or ESXi host must be entered every time a new Conferencing Node is created. For security and tracking reasons, these credentials will not be stored by the Management Node.

The account used to log in to vCenter Server or the ESXi host from the Management Node must have sufficient permissions to create virtual machines (VMs) in the folder or resource pool where the Conferencing Node will be deployed. The permissions listed below are required as a minimum; in vCenter Server, set these permissions at the Datacenter level or higher (one way to verify a role programmatically is shown in the sketch after this list):

  • Datastore > Allocate space
  • Datastore > Browse datastore
  • Network > Assign network
  • Resource > Assign virtual machine to resource pool
  • vApp > Import
  • Virtual Machine > Configuration > Add new disk
  • Virtual Machine > Interaction > Configure CD media
  • Virtual Machine > Interaction > Power On

The Administrator role includes all the above permissions (in addition to many others).
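If you prefer to check a custom role through the vSphere API, the Python sketch below (using pyVmomi) shows one possible approach. This is an illustration only: the privilege IDs are our mapping of the permission names listed above, so confirm them against content.authorizationManager.privilegeList on your own vCenter Server, and the role name is a placeholder.

    # Sketch: check that a vCenter role includes the privileges listed above.
    # The privilege IDs are our mapping of those permission names; verify them
    # against content.authorizationManager.privilegeList on your system.
    REQUIRED = {
        "Datastore.AllocateSpace",
        "Datastore.Browse",
        "Network.Assign",
        "Resource.AssignVMToPool",
        "VApp.Import",
        "VirtualMachine.Config.AddNewDisk",
        "VirtualMachine.Interact.SetCDMedia",
        "VirtualMachine.Interact.PowerOn",
    }

    def missing_privileges(content, role_name):
        for role in content.authorizationManager.roleList:
            if role.name == role_name:
                return REQUIRED - set(role.privilege)
        raise ValueError("role not found: %s" % role_name)

    # Example usage (assumes an existing pyVmomi connection "si"):
    # print(missing_privileges(si.RetrieveContent(), "PexipDeploy"))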

Host server requirements

The recommended hardware requirements for the Management Node and Conferencing Node host servers are described in Server design recommendations. In addition to this:

  • GPU: host servers do not require any specific hardware cards or GPUs.
  • Disk: either direct attached storage or shared storage can be used. The primary disk activity will be logging.
  • Multitenancy: this version of Pexip Infinity requires a dedicated VMware host for supported deployments. Multitenancy with other applications may be supported in the future, and is possible in a test environment as long as other applications on the same host server are not consuming significant CPU and Pexip Infinity can be given reserved memory.

General recommendations

Pexip Infinity can take advantage of advanced CPU features, so for optimal performance we recommend that you run Conferencing Nodes on your newer host servers.

CPUs with a large cache (15–30 MB+) are recommended over CPUs with a smaller cache (4–10 MB), especially when running 10 or more participants per conference.

To protect the overall quality of the conference, we highly recommend that any hardware resources allocated to a Conferencing Node are reserved specifically for its own use.
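One way to do this in vSphere is to set CPU and memory reservations on the Conferencing Node VM. The Python sketch below (pyVmomi) illustrates the general approach; the VM lookup is omitted, and the reservation values are placeholders that you should size to your own Conferencing Node configuration.

    # Sketch: reserve CPU (MHz) and memory (MB) for a Conferencing Node VM (pyVmomi).
    # "vm" is assumed to be an existing vim.VirtualMachine object; values are placeholders.
    from pyVmomi import vim

    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=20000)    # MHz
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=8192)  # MB
    task = vm.ReconfigVM_Task(spec=spec)   # reconfiguration runs asynchronously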

Impact on virtual environment

CPU

The CPU is the most critical component in a successful deployment of the Pexip Infinity platform.

Newer Intel (or AMD) CPUs typically provide more features that Pexip Infinity can use to deliver better performance. We therefore recommend that you deploy Pexip Infinity on newer hardware, and move less time-critical applications (for example, mail servers, web servers and file servers) to your older hardware.

Memory

The memory specified for the Pexip Infinity deployment should not be shared with other processes, because Pexip Infinity accesses memory at high speed when active. However, the amount of memory needed is small compared to the workload, and increasing it beyond the recommended amount will not significantly improve performance.

Storage

Apart from storing the Pexip Infinity application itself, disk activity during operation consists mainly of logging. There is therefore no need to dedicate your fastest or newest SSD drives to this application, as most real-time activity happens in memory. Standard disk performance, as found in most servers, is sufficient for good logging performance.

Network

Gigabit Ethernet connectivity from the host server is strongly recommended, because Conferencing Nodes send and receive real-time audio and video data, and any network bottlenecks should be avoided. The amount of traffic to expect can be calculated from the capacity of the servers, but a 100 Mbps network link can typically be saturated with ease if a large number of calls go through a given Conferencing Node. In general, expect 1–3 Mbps per call connection, depending on the call control setup.
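As a rough illustration of that sizing guidance, the short Python calculation below estimates aggregate bandwidth for a node from an assumed per-call figure; all the numbers are assumptions to adjust for your own deployment.

    # Sketch: rough bandwidth estimate for a Conferencing Node, based on the
    # 1-3 Mbps-per-call guidance above. All figures are assumptions to adjust.
    calls = 40              # concurrent call connections on this node
    mbps_per_call = 2.0     # mid-range assumption from the 1-3 Mbps guidance
    link_mbps = 1000        # Gigabit Ethernet uplink

    required = calls * mbps_per_call
    print("Estimated media traffic: %.0f Mbps (%.0f%% of a %d Mbps link)"
          % (required, 100.0 * required / link_mbps, link_mbps))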

Traffic shaping

Do not apply any traffic shaping that could limit the flow of Conferencing Node traffic without careful planning. If bandwidth usage to or from a Conferencing Node is too high, address this in the call control system; shaping at the Conferencing Node level will most likely degrade the experience for participants.

NIC teaming

VMware NIC teaming is a way to group several network interface cards (NICs) so that they behave as one logical NIC. When using NIC teaming in ESXi, we recommend the "Route based on originating virtual port ID" load balancing policy because of its low complexity (it does not consume CPU cycles on the host). Source MAC hash is also usable; we do not recommend IP hash because of the CPU overhead it incurs when handling large volumes of media packets.
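If you want to confirm which teaming policy a standard vSwitch is currently using, the read-only pyVmomi sketch below prints it for each vSwitch on a host. This is our illustration only; the policy string values shown in the comments (for example "loadbalance_srcid" for originating virtual port ID) should be checked against the vSphere API documentation for your version.

    # Sketch: print the NIC teaming (load balancing) policy of each standard vSwitch
    # on an ESXi host. "host" is assumed to be an existing vim.HostSystem object.
    # Values typically include "loadbalance_srcid" (originating virtual port ID),
    # "loadbalance_srcmac" (source MAC hash) and "loadbalance_ip" (IP hash).
    for vswitch in host.config.network.vswitch:
        policy = vswitch.spec.policy
        if policy and policy.nicTeaming:
            print("%s: %s" % (vswitch.name, policy.nicTeaming.policy))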

vMotion

Conferencing Nodes (and the Management Node) can be moved across host servers using vMotion.

You must put the Conferencing Node into maintenance mode and wait until all conferences on that node have finished before migrating it to another host server. See Taking a Conferencing Node out of service for more information.

For more information on vMotion in general, see http://www.vmware.com/products/datacenter-virtualization/vsphere/vmotion.html.

Enhanced vMotion Compatibility (EVC)

When EVC (Enhanced vMotion Compatibility) is enabled across a cluster of host servers, all servers in that cluster will emulate the lowest common denominator CPU. This allows you to move VMs between any servers in the cluster without any problems, but it means that if any servers in that cluster have newer-generation CPUs, their advanced features cannot be used.

Because Conferencing Nodes use the advanced features of newer-generation CPUs (for example, AVX on newer Intel CPUs), we recommend that you disable EVC for any clusters hosting Conferencing Nodes where the cluster includes a mix of new and old CPUs.

If you enable EVC on mixed-CPU clusters, the Pexip Infinity platform will run more slowly because it will cause the Conferencing Nodes to assume they are running on older hardware.
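To see whether a cluster currently has EVC enabled, and at which level, you can read its summary through the vSphere API. The pyVmomi sketch below is our illustration only; currentEVCModeKey is empty when EVC is disabled.

    # Sketch: report the EVC mode of each cluster (pyVmomi).
    # "content" is assumed to be an existing service instance content object.
    from pyVmomi import vim

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        mode = cluster.summary.currentEVCModeKey or "EVC disabled"
        print("%s: %s" % (cluster.name, mode))
    view.Destroy()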

If you enable EVC, you must select the Sandy Bridge-compatible EVC mode as a minimum. This is the lowest EVC mode that supports the AVX instruction set, which is required to run the Pexip Infinity platform.
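As a quick way to confirm that AVX is actually exposed to guests on a given host or cluster, you can inspect the CPU flags from any Linux VM running there, for example with the Python sketch below (or simply by searching /proc/cpuinfo). This is an illustration only and assumes a Linux guest.

    # Sketch: check whether the AVX instruction set is exposed to this Linux guest.
    # Run inside any Linux VM on the host/cluster in question.
    with open("/proc/cpuinfo") as f:
        flags = set()
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    print("AVX available" if "avx" in flags else "AVX not available")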

When enabling EVC or lowering the EVC mode, you should first power off any currently running VMs with a higher EVC mode than the one you intend to enable.

When disabling EVC or raising the EVC mode, any currently running VMs will not have access to the new level until they have been powered off and on again.

For instructions on disabling EVC, see Disabling EVC.

For more information on EVC in general, see http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.vcenterhost.doc%2FGUID-9F444D9B-44A0-4967-8C07-693C6B40278A.html.

vSphere High Availability

vSphere High Availability (HA) can be configured so that, if an ESXi host fails, its VMs are automatically restarted on another host in the cluster. This is supported for both the Management Node and Conferencing Nodes, and provides protection against host failures.

Loss of a Conferencing Node in such circumstances will result in any participants connected to that node being disconnected. They will have to redial the Virtual Meeting Room alias to rejoin the conference.

Momentary loss of the Management Node will not affect running conferences.

For more information on HA, see http://www.vmware.com/products/datacenter-virtualization/vsphere/high-availability.html.

vSphere Fault Tolerance

For zero downtime, the Management Node can be protected with vSphere Fault Tolerance (FT), because it uses only a single virtual CPU.

For more information on FT, see http://www.vmware.com/products/datacenter-virtualization/vsphere/fault-tolerance.html.