Deploying a Conferencing Node on an ESXi host
This process generates an .ova file that must then be deployed from within VMware on to an ESXi host.
- This file is specific to the Conferencing Node being deployed. It cannot be used to deploy multiple Conferencing Nodes.
- The file is single-use. It cannot be used to re-deploy the same Conferencing Node at a later date. To re-deploy the Conferencing Node, you must first delete it from the Pexip Infinity Management Node and from VMware, and then deploy a new Conferencing Node with the same configuration as the deleted node.
- Before you start, ensure that you are currently using the same machine that you will subsequently use to upload the generated file on to your host server.
To deploy a new Conferencing Node on to a VMware ESXi host:
- Go to and select .
You are now asked to provide the network configuration to be applied to the Conferencing Node, by completing the following fields:
Name
Enter the name to use when referring to this Conferencing Node in the Pexip Infinity Administrator interface.

Description
An optional field where you can provide more information about the Conferencing Node.

Role
This determines the Conferencing Node's role:
- Proxying Edge Node: a Proxying Edge Node handles all media and signaling connections with an endpoint or external device, but does not host any conferences — instead it forwards the media on to a Transcoding Conferencing Node for processing.
- Transcoding Conferencing Node: a Transcoding Conferencing Node handles all the media processing, protocol interworking, mixing and so on that is required in hosting Pexip Infinity calls and conferences. When combined with Proxying Edge Nodes, a transcoding node typically only processes the media forwarded on to it by those proxying nodes and has no direct connection with endpoints or external devices. However, a transcoding node can still receive and process the signaling and media directly from an endpoint or external device if required.
See Distributed Proxying Edge Nodes for more information.

Hostname
Enter the hostname and domain to assign to this Conferencing Node. Each Conferencing Node and Management Node must have a unique hostname.
The Hostname and Domain together make up the Conferencing Node's DNS name or FQDN. We recommend that you assign valid DNS names to all your Conferencing Nodes. For more information, see Assigning hostnames and FQDNs.
IPv4 address
Enter the IP address to assign to this Conferencing Node when it is created.

Network mask
Enter the IP network mask to assign to this Conferencing Node.

Gateway IPv4 address
Enter the IP address of the default gateway to assign to this Conferencing Node.

Secondary interface IPv4 address
The optional secondary interface IPv4 address for this Conferencing Node. If configured, this interface is used for signaling and media communications to clients, and the primary interface is used for communication with the Management Node and other Conferencing Nodes. For more information, see Conferencing Nodes with dual network interfaces (NICs).

Secondary interface network mask
The optional secondary interface network mask for this Conferencing Node.

System location
Select the physical location of this Conferencing Node. A system location should not contain a mixture of proxying nodes and transcoding nodes.
If the system location does not already exist, you can create a new one here by clicking to the right of the field. This will open up a new window showing the page used to add a system location. For further information, see About system locations.
SIP TLS FQDN
A unique identity for this Conferencing Node, used in signaling SIP TLS Contact addresses. For more information, see SIP TLS FQDN.

TLS certificate
The TLS certificate to use on this node. This must be a certificate that contains the above SIP TLS FQDN. Each certificate is shown in the format <subject name> (<issuer>).

IPv6 address
The IPv6 address for this Conferencing Node. Each Conferencing Node must have a unique IPv6 address.

Gateway IPv6 address
The IPv6 address of the default gateway.
If this is left blank, the Conferencing Node listens for IPv6 Router Advertisements to obtain a gateway address.
IPv4 static NAT address
The public IPv4 address used by this Conferencing Node when it is located behind a NAT device. Note that if you are using NAT, you must also configure your NAT device to route the Conferencing Node's IPv4 static NAT address to its IPv4 address.
For more information, see Configuring Pexip Infinity nodes to work behind a static NAT device.
Static routes
From the list of Available Static routes, select the routes to assign to the node, and then use the right arrow to move the selected routes into the Chosen Static routes list. For more information, see Managing static routes.

Enable distributed database
This should usually be enabled (checked) for all Conferencing Nodes that are expected to be "always on", and disabled (unchecked) for nodes that are expected to only be powered on some of the time (e.g. cloud bursting nodes that are likely to only be operational during peak times).
Enable SSH
Determines whether this node can be accessed over SSH.
- Use Global SSH setting: SSH access to this node is determined by the global Enable SSH setting.
- Off: this node cannot be accessed over SSH, regardless of the global Enable SSH setting.
- On: this node can be accessed over SSH, regardless of the global Enable SSH setting.
Default: Use Global SSH setting.
- Select .
You are now asked to provide information regarding the deployment environment, CPUs and memory of the Conferencing Node, by completing the following fields:
Option Description Deployment type
Select Manual (ESXi 6.7 and above) or Manual (ESXi 6.0 and 6.5) as appropriate.
For more information on each of the available options, see Deployment environments.
Number of virtual CPUs to assign
Enter the number of virtual CPUs to assign to the Conferencing Node. We recommend no more than one virtual CPU per physical core, unless you are making use of CPUs that support hyperthreading — see NUMA affinity and hyperthreading for more details.
System memory (in megabytes) to assign
Enter the amount of RAM (in megabytes) to assign to the Conferencing Node. The number entered must be a multiple of 4.
We recommend 1024 MB (1 GB) RAM for each virtual CPU. The field automatically defaults to the recommended amount, based on the number of virtual CPUs you have entered.
SSH password
Enter the password to use when logging in to this Conferencing Node's Linux operating system over SSH. The username is always admin.
Logging in to the operating system is required when changing passwords or for diagnostic purposes only, and should generally be done under the guidance of your Pexip authorized support representative. In particular, do not change any configuration using SSH — all changes should be made using the Pexip Infinity Administrator interface.
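The sizing rule above (1024 MB of RAM per virtual CPU, with the total a multiple of 4) can be expressed as a small helper. This is only a sketch of the arithmetic; the Administrator interface computes the recommended default for you based on the vCPU count you enter.

```python
def recommended_memory_mb(vcpus, mb_per_vcpu=1024):
    """Recommended RAM for a Conferencing Node: 1024 MB per virtual CPU.

    Any whole multiple of 1024 is also a multiple of 4, but the guard
    below catches non-conforming values if mb_per_vcpu is customised.
    """
    memory = vcpus * mb_per_vcpu
    assert memory % 4 == 0, "System memory must be a multiple of 4 MB"
    return memory

print(recommended_memory_mb(4))  # prints 4096
```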
A message appears at the top of the page:
The Conferencing Node image will download shortly or click on the following link (download image now)
After a short while, a file with the name pexip-<hostname>.<domain>.ova is generated and downloaded.
Note that the generated file is only available for your current session, so you should download it immediately.
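If you script any part of the workflow, the expected file name can be derived from the naming pattern above. A trivial sketch, with a hypothetical hostname and domain, useful for example when checking that the download completed:

```python
def generated_ova_filename(hostname, domain):
    """Expected name of the image file generated for a Conferencing Node,
    following the pexip-<hostname>.<domain>.ova pattern."""
    return f"pexip-{hostname}.{domain}.ova"

# Hypothetical node details for illustration.
print(generated_ova_filename("conf01", "example.com"))  # prints pexip-conf01.example.com.ova
```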
When you want to deploy the Conferencing Node VM, use a vSphere client to log in to vCenter Server and select . Follow the on-screen prompts to deploy the .ova file; this is similar to the steps you used when deploying the Management Node. You should always deploy the nodes with Thick Provisioned disks.
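As an alternative to the vSphere client wizard, an .ova can also be deployed from the command line with VMware's ovftool. The sketch below only builds and echoes the command so it can be reviewed before running it for real; the host, datastore, credentials and node name are placeholders, and the exact flags available may vary by ovftool version. --diskMode=thick matches the thick-provisioning recommendation above.

```shell
# Placeholder values -- substitute your own environment details.
OVA_FILE="pexip-conf01.example.com.ova"
ESXI_HOST="esxi01.example.com"
DATASTORE="datastore1"
NODE_NAME="conf01"

# Build the ovftool invocation; echoed rather than executed so the
# command can be checked before deployment.
CMD="ovftool --diskMode=thick --datastore=${DATASTORE} --name=${NODE_NAME} ${OVA_FILE} vi://root@${ESXI_HOST}/"
echo "${CMD}"
```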
After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for its status to be updated on the Management Node. Until it becomes available, the Management Node reports the status of the Conferencing Node as having a last contacted and last updated date of "Never". "Connectivity lost between nodes" alarms relating to that node may also appear temporarily.
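If you automate deployments, the "wait until available" step can be handled with a simple polling loop. This is a generic sketch: check_fn stands in for however you query the node's status (for example via the Management Node) and is an assumption for illustration, not a Pexip API; the sleep parameter is injectable so the loop can be tested without real delays.

```python
import time

def wait_for_node(check_fn, timeout_s=600, interval_s=10, sleep=time.sleep):
    """Poll check_fn until it returns True or timeout_s elapses.

    check_fn is any zero-argument callable reporting whether the
    Conferencing Node is up. Returns True if the node came up in time.
    """
    waited = 0
    while waited <= timeout_s:
        if check_fn():
            return True
        sleep(interval_s)
        waited += interval_s
    return False
```

For example, `wait_for_node(lambda: node_responds(node_ip))` with a hypothetical reachability check; the default 600-second timeout comfortably covers the roughly 5 minutes a new node needs.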
You can only enable automatic startup after the Conferencing Node has been deployed.
To enable automatic startup using the vSphere web client:
- Log in to the VM manager (vCenter Server).
- From the navigation panel, select the tab and navigate to the host server on which the node's VM is installed.
- From the main panel, select the tab.
- From the left-hand panel, select Virtual Machines > VM Startup/Shutdown.
- At the top right of the page, select .
- In the System influence section, select Automatically start and stop the virtual machines with the system.
- Select .
We strongly recommend that you disable EVC (Enhanced vMotion Compatibility) for any ESXi clusters hosting Conferencing Nodes that include a mix of old and new CPUs. If EVC is enabled on such clusters, the Pexip Infinity platform will run more slowly because the Conferencing Nodes assume they are running on older hardware.
For more information, see Enhanced vMotion Compatibility (EVC).
To disable EVC:
- From the vSphere client's navigation panel, select the cluster.
- From the main panel, select the Configure tab.
- From the left-hand panel, select Configuration > VMware EVC.
The current EVC settings are shown.
- At the top right of the page, select Edit.
- Select Disable EVC.