Deploying a Conferencing Node on an ESXi host

This process generates an .ova file that must then be deployed from within VMware onto an ESXi host.

Note that:

  • This file is specific to the Conferencing Node being deployed. It cannot be used to deploy multiple Conferencing Nodes.
  • The file is single-use. It cannot be used to re-deploy the same Conferencing Node at a later date. To re-deploy the Conferencing Node, you must first delete it from the Pexip Infinity Management Node and from VMware, and then deploy a new Conferencing Node with the same configuration as the deleted node.
  • Before you start, ensure that you are using the same machine that you will subsequently use to upload the generated file to your host server.

Generating, downloading and deploying the .ova file

  1. From the Pexip Infinity Administrator interface, go to Platform > Conferencing Nodes and select Add Conferencing Node.
  2. You are now asked to provide the network configuration to be applied to the Conferencing Node, by completing the following fields:

    Name

    Enter the name to use when referring to this Conferencing Node in the Pexip Infinity Administrator interface.

    Description

    An optional field where you can provide more information about the Conferencing Node.
    Role

    This determines the Conferencing Node's role:

    • Proxying Edge Node: a Proxying Edge Node handles all media and signaling connections with an endpoint or external device, but does not host any conferences — instead it forwards the media on to a Transcoding Conferencing Node for processing.
    • Transcoding Conferencing Node: a Transcoding Conferencing Node handles all the media processing, protocol interworking, mixing and so on that is required in hosting Pexip Infinity calls and conferences. When combined with Proxying Edge Nodes, a transcoding node typically only processes the media forwarded on to it by those proxying nodes and has no direct connection with endpoints or external devices. However, a transcoding node can still receive and process the signaling and media directly from an endpoint or external device if required.

    See Distributed Proxying Edge Nodes for more information.

    Hostname

    Domain

    Enter the hostname and domain to assign to this Conferencing Node. Each Conferencing Node and Management Node must have a unique hostname.

    The Hostname and Domain together make up the Conferencing Node's DNS name or FQDN. We recommend that you assign valid DNS names to all your Conferencing Nodes. For more information, see Assigning hostnames and FQDNs.

    IPv4 address

    Enter the IP address to assign to this Conferencing Node when it is created.

    Network mask

    Enter the IP network mask to assign to this Conferencing Node.

    Note that IPv4 address and Network mask apply to the eth0 interface.

    Gateway IPv4 address

    Enter the IP address of the default gateway to assign to this Conferencing Node.

    Note that the Gateway IPv4 address is not directly associated with a particular network interface; instead, it is assigned to whichever interface (eth0 or eth1) is configured with a subnet that contains the gateway address. Thus, if the gateway address lies in the subnet used by eth0, the gateway is assigned to eth0, and likewise for eth1 (this rule is illustrated in the sketch after the deployment procedure below).

    Secondary interface IPv4 address

    The optional secondary interface IPv4 address for this Conferencing Node. If configured, this interface is used for signaling and media communications to clients, and the primary interface is used for communication with the Management Node and other Conferencing Nodes. For more information, see Conferencing Nodes with dual network interfaces (NICs).

    Secondary interface network mask

    The optional secondary interface network mask for this Conferencing Node.

    Note that Secondary interface IPv4 address and Secondary interface network mask apply to the eth1 interface.

    System location

    Select the physical location of this Conferencing Node. A system location should not contain a mixture of proxying nodes and transcoding nodes.

    If the system location does not already exist, you can create a new one here by clicking the add (+) icon to the right of the field. This opens a new window showing the Add system location page. For further information see About system locations.

    Configured FQDN

    A unique identity for this Conferencing Node, used in signaling SIP TLS Contact addresses. For more information, see Assigning a Configured FQDN.

    TLS certificate

    The TLS certificate to use on this node. This must be a certificate that contains the above Configured FQDN. Each certificate is shown in the format <subject name> (<issuer>).
    IPv6 address

    The IPv6 address for this Conferencing Node. Each Conferencing Node must have a unique IPv6 address.

    Gateway IPv6 address

    The IPv6 address of the default gateway.

    If this is left blank, the Conferencing Node listens for IPv6 Router Advertisements to obtain a gateway address.

    IPv4 static NAT address

    The public IPv4 address used by this Conferencing Node when it is located behind a NAT device. Note that if you are using NAT, you must also configure your NAT device to route the Conferencing Node's IPv4 static NAT address to its IPv4 address.

    For more information, see Configuring Pexip Infinity nodes to work behind a static NAT device.

    Static routes

    From the list of Available Static routes, select the routes to assign to the node, and then use the right arrow to move the selected routes into the Chosen Static routes list. For more information, see Managing static routes.
    Enable distributed database

    This should usually be enabled (checked) for all Conferencing Nodes that are expected to be "always on", and disabled (unchecked) for nodes that are expected to only be powered on some of the time (e.g. cloud bursting nodes that are likely to only be operational during peak times).

    Enable SSH

    Determines whether this node can be accessed over SSH.

    • Use Global SSH setting: SSH access to this node is determined by the global Enable SSH setting (Platform > Global settings > Connectivity > Enable SSH).
    • Off: this node cannot be accessed over SSH, regardless of the global Enable SSH setting.
    • On: this node can be accessed over SSH, regardless of the global Enable SSH setting.

    Default: Use Global SSH setting.

    SSH authorized keys

    You can optionally assign one or more SSH authorized keys to use for SSH access.

    From the list of Available SSH authorized keys, select the keys to assign to the node, and then use the right arrow to move the selected keys into the Chosen SSH authorized keys list.

    Note that in cloud environments, this list does not include any of the SSH keys configured within that cloud service.

    For more information, see Configuring SSH authorized keys.

    Use SSH authorized keys from cloud service

    When a node is deployed in a cloud environment, you can continue to use the SSH keys configured within the cloud service where available, in addition to any of your own assigned keys (as configured in the field above). If you disable this option you can only use your own assigned keys.

    Default: enabled.

  3. Select Save.
  4. You are now asked to complete the following fields:

    Deployment type

    Select Manual (ESXi 8.0 and above), Manual (ESXi 7.0) or Manual (ESXi 6.7) as appropriate.

    Number of virtual CPUs to assign

    Enter the number of virtual CPUs to assign to the Conferencing Node. We recommend no more than one virtual CPU per physical core, unless you are making use of CPUs that support Hyper-Threading — see NUMA affinity and hyperthreading for more details.
    System memory (in megabytes) to assign

    Enter the amount of RAM (in megabytes) to assign to the Conferencing Node. The number entered must be a multiple of 4.

    We recommend 1024 MB (1 GB) RAM for each virtual CPU. The field automatically defaults to the recommended amount, based on the number of virtual CPUs you have entered.

    SSH password

    Enter the password to use when logging in to this Conferencing Node's Linux operating system over SSH. The username is always admin.

    Logging in to the operating system is required when changing passwords or for diagnostic purposes only, and should generally be done under the guidance of your Pexip authorized support representative. In particular, do not change any configuration using SSH — all changes should be made using the Pexip Infinity Administrator interface.

  5. Select Download.

    A message appears at the top of the page: "The Conferencing Node image will download shortly or click on the following link".

    After a short while, a file with the name pexip-<hostname>.<domain>.ova is generated and downloaded.

    Note that the generated file is only available for your current session, so you should download it immediately.

  6. When you want to deploy the Conferencing Node VM, use a vSphere client to log in to vCenter Server. Select the VMs and Templates tab, click on the Actions menu and select Deploy OVF Template....
  7. Follow the on-screen prompts to deploy the .ova file; the steps are similar to those you used when deploying the Management Node. You should always deploy the nodes with Thick Provisioned disks. (A command-line alternative is sketched below.)
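
If you prefer to deploy the generated .ova from the command line rather than through the vSphere client, VMware's ovftool utility can perform an equivalent deployment. The following Python sketch simply wraps ovftool; the vCenter address, credentials, inventory path, datastore and network names are all placeholders for your environment.

    # Illustrative sketch: deploying the generated .ova with VMware ovftool.
    # All names below (vCenter host, credentials, inventory path, datastore,
    # network) are placeholders: substitute the values for your environment.
    import subprocess

    ova_file = "pexip-conf01.example.com.ova"  # the downloaded image
    # vi:// locator: URL-encode the "@" in the username as %40.
    target = "vi://administrator%40vsphere.local@vcenter.example.com/DC1/host/Cluster1/"

    subprocess.run(
        [
            "ovftool",
            "--acceptAllEulas",
            "--diskMode=thick",     # nodes should use thick-provisioned disks
            "--name=pexip-conf01",  # VM name in the vCenter inventory
            "--datastore=datastore1",
            "--network=VM Network",
            ova_file,
            target,
        ],
        check=True,
    )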

After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for its status to be updated on the Management Node. Until it becomes available, the Management Node reports the status of the Conferencing Node as having a last contacted and last updated date of "Never". "Connectivity lost between nodes" alarms relating to that node may also appear temporarily.
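
The gateway-to-interface rule described under Gateway IPv4 address in step 2 above, and the recommendation to assign valid DNS names, can both be sanity-checked before you deploy. The following minimal Python sketch uses example addresses and names throughout; substitute your own values.

    # Minimal sketch of the gateway-to-interface rule described above, plus a
    # basic DNS check for the node's FQDN. All values are examples only.
    import ipaddress
    import socket

    # Values as entered on the Add Conferencing Node page (examples).
    eth0_net = ipaddress.ip_network("10.0.1.10/255.255.255.0", strict=False)
    eth1_net = ipaddress.ip_network("198.51.100.10/255.255.255.0", strict=False)
    gateway = ipaddress.ip_address("198.51.100.1")
    fqdn = "conf01.example.com"  # Hostname + Domain

    # The gateway is assigned to whichever interface's subnet contains it.
    for name, net in (("eth0", eth0_net), ("eth1", eth1_net)):
        if gateway in net:
            print(f"Gateway {gateway} will be assigned to {name}")
            break
    else:
        print("Warning: gateway is not in the eth0 or eth1 subnet")

    # The node's FQDN should resolve in DNS.
    try:
        print(f"{fqdn} resolves to {socket.gethostbyname(fqdn)}")
    except socket.gaierror:
        print(f"Warning: {fqdn} does not resolve in DNS")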

Enabling automatic startup

After deploying a new Conferencing Node from VMware, you must enable automatic startup of that virtual machine (VM). In VMware, automatic startup is disabled by default for every new VM — which means that if the host server is powered down for any reason, when it restarts the VM will not restart and must be started manually.

You can only enable automatic startup after the Conferencing Node has been deployed.

To enable automatic startup using the vSphere web client (HTML 5):

  1. Log in to the VM manager (vCenter Server).
  2. From the navigation panel, select the Hosts and Clusters tab and navigate to the host server on which the node's VM is installed.
  3. From the main panel, select the Configure tab.
  4. From the left-hand panel, select Virtual Machines > VM Startup/Shutdown.
  5. At the top right of the page, select Edit.
  6. In the System influence section, select Automatically start and stop the virtual machines with the system.
  7. Select OK.
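
If you manage many hosts, the same setting can be scripted. The following is a sketch using the pyvmomi library (pip install pyvmomi); the vCenter address, credentials and VM name are placeholders, and connection arguments can vary between pyvmomi versions.

    # Sketch: enabling VM autostart via pyvmomi. The vCenter host, credentials
    # and VM name are placeholders for your own setup.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(
        host="vcenter.example.com",
        user="administrator@vsphere.local",
        pwd="password",
        sslContext=ssl._create_unverified_context(),  # lab use only
    )
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True
        )
        vm = next(v for v in view.view if v.name == "pexip-conf01")

        # Autostart is configured on the host that runs the VM.
        autostart = vm.runtime.host.configManager.autoStartManager

        spec = vim.host.AutoStartManager.Config()
        spec.defaults = vim.host.AutoStartManager.SystemDefaults(enabled=True)

        info = vim.host.AutoStartManager.AutoPowerInfo()
        info.key = vm
        info.startAction = "powerOn"
        info.startOrder = -1   # -1 = no specific start order
        info.startDelay = -1   # -1 = use the host default
        info.stopAction = "systemDefault"
        info.stopDelay = -1
        info.waitForHeartbeat = "systemDefault"
        spec.powerInfo = [info]

        autostart.ReconfigureAutostart(spec)
    finally:
        Disconnect(si)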

Disabling EVC

We strongly recommend that you disable EVC (Enhanced vMotion Compatibility) for any ESXi clusters hosting Conferencing Nodes that include a mix of old and new CPUs. If EVC is enabled on such clusters, the Pexip Infinity platform will run more slowly because the Conferencing Nodes assume they are running on older hardware.

For more information, see Enhanced vMotion Compatibility (EVC).

To disable EVC:

  1. From the vSphere client's navigation panel, select the cluster.
  2. From the main panel, select the Configure tab.
  3. From the left-hand panel, select Configuration > VMware EVC.

    The current EVC settings are shown.

  4. At the top right of the page, select Edit.
  5. Select Disable EVC.
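
You can also confirm a cluster's EVC state programmatically: the cluster summary exposes the active EVC mode key, which is unset when EVC is disabled. A pyvmomi sketch, again with placeholder connection details:

    # Sketch: reporting each cluster's EVC mode via pyvmomi. Connection details
    # are placeholders; an empty currentEVCModeKey means EVC is disabled.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(
        host="vcenter.example.com",
        user="administrator@vsphere.local",
        pwd="password",
        sslContext=ssl._create_unverified_context(),  # lab use only
    )
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True
        )
        for cluster in view.view:
            mode = cluster.summary.currentEVCModeKey or "EVC disabled"
            print(f"{cluster.name}: {mode}")
    finally:
        Disconnect(si)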

Next steps

See Testing and next steps after initial installation.