Deploying new Conferencing Nodes
Conferencing Nodes are virtual machines that provide the capacity for conferences. They handle all conference media and signaling.
You can deploy your Pexip Infinity system as either a mix of Proxying Edge Nodes and Transcoding Conferencing Nodes, or as a system that only contains Transcoding Conferencing Nodes.
A typical deployment scenario is to use Proxying Edge Nodes as a front for many privately-addressed Transcoding Conferencing Nodes. Those outward-facing proxying nodes receive all the signaling and media from endpoints and other external systems, and then forward that media on to internally located transcoding nodes, which perform the standard Pexip Infinity transcoding, gateway and conference hosting functions.
There is no limit on the number of Conferencing Nodes that you can add to the Pexip Infinity platform. However, each Conferencing Node must have a unique:
- IP address
- DNS name (hostname and domain)
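Because every node needs a unique IP address and DNS name, it can be worth sanity-checking a deployment plan for clashes before creating any VMs. The sketch below is illustrative only and not part of the product; the node list is hypothetical, with one deliberate duplicate.

```python
from collections import Counter

# Hypothetical deployment plan: (hostname.domain, IP address) per node.
planned_nodes = [
    ("conf01.example.com", "10.0.1.11"),
    ("conf02.example.com", "10.0.1.12"),
    ("conf03.example.com", "10.0.1.12"),  # deliberate clash for the demo
]

def find_duplicates(nodes):
    """Return any FQDNs or IP addresses that appear more than once."""
    fqdns = Counter(fqdn for fqdn, _ in nodes)
    ips = Counter(ip for _, ip in nodes)
    return (
        [f for f, n in fqdns.items() if n > 1],
        [i for i, n in ips.items() if n > 1],
    )

dup_fqdns, dup_ips = find_duplicates(planned_nodes)
print(dup_fqdns, dup_ips)  # → [] ['10.0.1.12']
```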
Conferencing Nodes can be deployed with dual network interfaces (NICs). Note that you must specify both interface addresses when initially deploying the Conferencing Node; you may also need to assign a static route while deploying the node. For more information, see Conferencing Nodes with dual network interfaces (NICs).
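Whether a dual-NIC node needs a static route comes down to whether a given destination falls within one of the interface subnets; destinations on neither subnet follow the default route, which may be on the wrong interface. A hedged illustration of that reasoning using Python's ipaddress module (all addresses are made up):

```python
import ipaddress

# Hypothetical dual-NIC node: one outward-facing and one internal interface.
nic_subnets = {
    "nic0 (external)": ipaddress.ip_network("198.51.100.0/24"),
    "nic1 (internal)": ipaddress.ip_network("10.0.1.0/24"),
}

def route_for(destination):
    """Pick the directly connected interface for a destination, if any.

    Destinations on neither subnet use the default route, so reaching
    them via the internal interface would need a static route."""
    addr = ipaddress.ip_address(destination)
    for nic, net in nic_subnets.items():
        if addr in net:
            return nic
    return "default route (static route needed if it must use nic1)"

print(route_for("10.0.1.50"))   # directly connected, internal interface
print(route_for("10.99.0.7"))   # on neither subnet
```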
After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for its status to be updated on the Management Node. Until it is available, the Management Node will report the status of the Conferencing Node as having a last contacted and last updated date of "Never". "Connectivity lost between nodes" alarms relating to that node may appear temporarily.
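Rather than repeatedly refreshing the Administrator interface during that ~5-minute window, you could poll for the node's status programmatically. The sketch below assumes a status-fetching callable (for example, a wrapper around a management API status request); the function name and response shape here are hypothetical, not a documented interface.

```python
import time

def wait_until_ready(fetch_status, timeout=600, interval=15):
    """Poll until the node reports a real last-contacted time.

    fetch_status: callable returning a dict like {"last_contacted": ...};
    a value of None (shown as "Never" in the UI) means not ready yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("last_contacted") is not None:
            return status
        time.sleep(interval)
    raise TimeoutError("Conferencing Node did not become available in time")

# Example with a canned fetcher standing in for a real API call:
responses = iter([
    {"last_contacted": None},
    {"last_contacted": "2024-01-01T00:05:00Z"},
])
print(wait_until_ready(lambda: next(responses), interval=0))
```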
All Conferencing Nodes have identical service configuration, which is obtained automatically from the Management Node.
Do not use VMware, Hyper-V or any other tools to clone instances of existing Conferencing Node virtual machines (VMs). Conferencing Nodes must always be created using the Pexip Infinity Administrator interface or management API.
See Configuring existing Conferencing Nodes for information about changing the details of an existing Conferencing Node.
Before deploying any Conferencing Nodes:
- All host servers must be synchronized with accurate time before you install the Pexip Infinity Management Node or Conferencing Nodes on them.
- You must enable NTP on the Pexip Infinity Management Node before you deploy any Conferencing Nodes.
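One way to verify time synchronization on a Linux host server is to inspect `timedatectl` output for the "System clock synchronized" field. The parsing helper below is an illustrative sketch, and the sample output is an abridged, made-up excerpt:

```python
def clock_synchronized(timedatectl_output):
    """Return True if `timedatectl` reports the clock as synchronized."""
    for line in timedatectl_output.splitlines():
        key, _, value = line.strip().partition(":")
        if key.strip() == "System clock synchronized":
            return value.strip() == "yes"
    return False

# Illustrative excerpt of `timedatectl` output on a synchronized host:
sample = """\
               Local time: Mon 2024-01-01 12:00:00 UTC
System clock synchronized: yes
              NTP service: active
"""
print(clock_synchronized(sample))  # → True
```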
When you deploy a new Conferencing Node, a generic instance of a Conferencing Node VM is created and then configured with its specific details, such as its IP address and hostname.
This deployment process has been automated (or partly automated) for on-premises environments using VMware ESXi, Microsoft Hyper-V, KVM or Xen hypervisors. When deploying in a cloud environment such as Microsoft Azure, Amazon Web Services (AWS) or Google Cloud Platform (GCP), you must also create a suitable VM instance in that environment to host your Conferencing Node before applying a generic configuration file. You can also use a generic VM template to deploy a Conferencing Node in other environments using unsupported hypervisors or orchestration layers.
The different deployment environment options are described below. Your Pexip Infinity platform can contain Conferencing Nodes deployed in any combination of these environments.
Automatic (ESXi)
Pexip Infinity deploys the Conferencing Node VM on a host server running VMware vSphere ESXi 5.x or higher.
Choose this option if you want to deploy the Conferencing Node immediately, and there is network connectivity to the host server on which the Conferencing Node is to be deployed.
Automatic deployment of Conferencing Nodes in VMware environments was deprecated in Pexip Infinity v23 and will no longer be available from v25. From that version onwards, you must deploy your Conferencing Nodes manually, as with other hypervisor environments; as a consequence, VM managers will no longer be required or supported from v25.
Manual (ESXi 6.x / 5.x / 4.1)
Pexip Infinity generates an .ova file that you must then manually deploy from within VMware on to an ESXi host in order to create the Conferencing Node VM.
Choose one of these options (selecting the appropriate ESXi host version for your environment) if the Management Node does not have existing connectivity to the host server on which the Conferencing Node is to be deployed, or you do not want to deploy the Conferencing Node immediately. For more information, see Manually deploying a Conferencing Node on an ESXi host.
(As of 1 January 2020, deployments using vSphere ESXi 4.1 are no longer supported.)
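For the manual ESXi options, one common way to deploy the generated .ova file is VMware's ovftool command-line utility. The sketch below only assembles the command line; the host, datastore and file names are placeholders, and you should verify the exact flags against your ovftool version.

```python
def ovftool_command(ova_path, esxi_host, datastore, vm_name, username="root"):
    """Assemble an ovftool invocation for deploying a Conferencing Node .ova.

    Illustrative sketch only; all argument values here are placeholders."""
    return [
        "ovftool",
        f"--name={vm_name}",
        f"--datastore={datastore}",
        ova_path,
        f"vi://{username}@{esxi_host}/",
    ]

cmd = ovftool_command("confnode.ova", "esxi01.example.com",
                      "datastore1", "confnode01")
print(" ".join(cmd))
```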
Manual (Hyper-V)
Pexip Infinity generates a file that you must then manually deploy on a host server running either Microsoft Hyper-V Server 2012 or later, or Windows Server 2012 or later, in order to create the Conferencing Node VM.
Choose this option for all deployments that use Hyper-V.
For more information, see Manually deploying a Conferencing Node on a Hyper-V host.
Manual (KVM)
Choose this option to generate an .ova file that is suitable for deploying on a KVM host in order to create the Conferencing Node VM.
For more information, see Manually deploying a Conferencing Node on a KVM host.
Manual (Xen)
Choose this option to generate an .ova file that is suitable for deploying on a Xen host in order to create the Conferencing Node VM.
For more information, see Manually deploying a Conferencing Node on a Xen host.
Generic (configuration-only)
Pexip Infinity generates a file containing the configuration of the Conferencing Node. You then upload this file to a generic Conferencing Node that has been created from the Pexip-supplied VM template, in order to configure it with the appropriate settings.
This option is most typically used when deploying a Conferencing Node in a cloud environment such as Microsoft Azure, AWS or GCP; see the deployment information specific to each of those platforms.
You can also choose this option for any deployments that do not use ESXi, Hyper-V, KVM or Xen hypervisors, or for Hyper-V in a cloud-based environment. For more information, see Deploying a Conferencing Node using a generic VM template and configuration file.