Network deployment options

If you need to support business-to-business video calls, or to provide access to Pexip Infinity resources from external systems and endpoints (such as remote or federated Skype for Business / Lync clients, remote SIP and H.323 endpoints, and Infinity Connect clients), you need to consider how to deploy your Pexip Infinity Conferencing Nodes.

This section explains how the Pexip Infinity platform fits into typical network deployment scenarios.

For additional flexibility you can deploy Conferencing Nodes with dual network interfaces (one "internal" interface for inter-node communication, and one "external" interface for signaling and media to endpoints and other video devices).

The Pexip Infinity platform can also be deployed as a cloud service via Amazon Web Services (AWS), Microsoft Azure or Google Cloud Platform, with private, public or hybrid deployment options.

General network requirements

Note that in all Pexip Infinity deployment scenarios:

  • The Management Node must be able to reach all Conferencing Nodes (Proxying Edge Nodes and Transcoding Conferencing Nodes) and vice versa.
  • Each Conferencing Node must be able to reach every other Conferencing Node (Proxying Edge Nodes and Transcoding Conferencing Nodes), except:
    • If a location contains only Proxying Edge Nodes, then those proxying nodes require IPsec connectivity only with:

      • any other proxying nodes in that location
      • all nodes in the transcoding location, and in the primary and secondary overflow locations, that are associated with that location
      • the Management Node.

      This means that the proxying nodes in one location do not need to have a direct network connection to other proxying nodes in other locations.

      (If the location does not have an associated transcoding location, primary or secondary overflow location defined, or if it contains a mix of proxying nodes and transcoding nodes, then those proxying nodes must be able to reach all other Conferencing Nodes.)

  • Any internal firewalls must be configured to allow UDP port 500 (IKE) and traffic using IP protocol 50 (ESP) in both directions between all Pexip nodes; this traffic carries the IPsec connections between nodes.
  • There cannot be a NAT between any Pexip nodes.
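The inter-node firewall requirement above can be sketched as packet-filter rules. The following is a minimal illustration using iptables, assuming a hypothetical node subnet of 10.44.0.0/16 (substitute the actual addresses of your Pexip nodes):

```shell
# Allow IKE key exchange (UDP port 500) between Pexip nodes
iptables -A FORWARD -s 10.44.0.0/16 -d 10.44.0.0/16 -p udp --dport 500 -j ACCEPT
# Allow the IPsec payload itself (ESP, IP protocol 50)
iptables -A FORWARD -s 10.44.0.0/16 -d 10.44.0.0/16 -p esp -j ACCEPT
```

Because the source and destination ranges are the same subnet, each rule covers both directions of traffic. Remember also that no address translation may be applied to this traffic, as there cannot be a NAT between any Pexip nodes.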

Privately-addressed "on-premises" Transcoding Conferencing Nodes

If you have an on-premises deployment of privately-addressed Pexip Infinity Transcoding Conferencing Nodes, we recommend that you deploy publicly-routable Proxying Edge Nodes to connect external clients and systems to those on-premises transcoding nodes.

The following examples also show how your Pexip Infinity platform may be deployed alongside any existing third-party call control systems and how it can be integrated with on-premises Microsoft Skype for Business / Lync.

Proxying Edge Nodes in combination with third-party video call control

This example deployment scenario uses Proxying Edge Nodes in combination with third-party video call control:

In this type of deployment scenario:

  • The Management Node and the Transcoding Conferencing Nodes are deployed with private IP addresses in the local enterprise network.
  • External Infinity Connect clients (WebRTC and RTMP) connect to Proxying Edge Nodes. Their signaling is terminated on the proxying node and their media is proxied through to the internal Transcoding Conferencing Nodes. In addition, you can optionally deploy a reverse proxy for load balancing, resiliency or branding.
  • Federated Skype for Business / Lync clients connect to Proxying Edge Nodes. Their signaling is terminated on the proxying node and their media is proxied through to the internal Transcoding Conferencing Nodes (an on-premises Skype for Business / Lync server is not needed). As proxying nodes support ICE, a TURN server is not required.
  • External SIP and H.323 endpoints and other forms of business-to-business video calls connect to the Transcoding Conferencing Nodes via a firewall traversal / video call control solution such as a Cisco VCS.
  • The external firewall can optionally be a NAT device.

For information about integrating Pexip Infinity with other third-party call management systems, see Integrating Pexip Infinity with other systems.

For more information about proxying nodes, see Deployment guidelines for Proxying Edge Nodes.

Routing all external calls via Proxying Edge Nodes

This option extends the previous example by using Proxying Edge Nodes to handle all external calls.

This example deployment scenario is the same as the previous one (with third-party call control), except that:

  • SIP and H.323 devices connect directly to Proxying Edge Nodes, which terminate the signaling and proxy their media through to the Transcoding Conferencing Nodes. A third-party call control solution is not required.
  • Those SIP and H.323 devices can also register to a Proxying Edge Node if required.
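For external SIP and H.323 devices to locate the Proxying Edge Nodes directly, the video domain typically needs public DNS SRV records pointing at the proxying nodes. The following is a hypothetical zone fragment for a vc.example.com subdomain; the host names, addresses and TTLs are illustrative, the ports shown are the standard protocol defaults, and you would publish records only for the services you actually use:

```
; SRV records (priority weight port target) for SIP and H.323
_sip._tcp.vc.example.com.     86400 IN SRV 10 10 5060 px01.vc.example.com.
_sips._tcp.vc.example.com.    86400 IN SRV 10 10 5061 px01.vc.example.com.
_h323cs._tcp.vc.example.com.  86400 IN SRV 10 10 1720 px01.vc.example.com.
_h323ls._udp.vc.example.com.  86400 IN SRV 10 10 1719 px01.vc.example.com.
; Each Proxying Edge Node needs a publicly-resolvable A record
px01.vc.example.com.          86400 IN A   198.51.100.10
px02.vc.example.com.          86400 IN A   198.51.100.11
```

Additional SRV records with equal priority and weight can be published for each further proxying node (px02 and so on) to distribute inbound calls across them.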

Combining with on-premises Microsoft Skype for Business / Lync

If you have on-premises Microsoft Skype for Business / Lync, you can deploy on-premises Conferencing Nodes alongside your SfB/Lync servers and route external clients as appropriate either via your Proxying Edge Nodes or via your SfB/Lync Edge server.

For example your deployment may look like this:

In this deployment scenario:

  • The Management Node and the Transcoding Conferencing Nodes are deployed with private IP addresses in the local enterprise network.
  • External SIP and H.323 endpoints and other forms of business-to-business video calls connect to the Transcoding Conferencing Nodes via a firewall traversal / video call control solution such as a Cisco VCS.
  • External Infinity Connect clients (WebRTC and RTMP) connect to Proxying Edge Nodes. Their signaling is terminated on the proxying node and their media is proxied through to the internal Transcoding Conferencing Nodes. In addition, you can optionally deploy a reverse proxy for load balancing, resiliency or branding.
  • Federated SfB/Lync calls to the Pexip Infinity video subdomain (e.g. @vc.example.com) are routed through Proxying Edge Nodes.
  • Remote corporate SfB/Lync clients are routed through your Skype for Business / Lync Edge server as normal, but they can also make Fusion gateway calls to the Pexip Infinity video subdomain (e.g. @vc.example.com). In this case, media is routed through the SfB/Lync Edge server, provided that the internal Transcoding Conferencing Node can route to the public-facing interface of the SfB/Lync Edge server; otherwise a TURN server is required.
  • Federated calls to your SfB/Lync domain (e.g. @example.com) are routed through your Skype for Business / Lync Edge server as normal.
  • Calls from external SIP, H.323 and Infinity Connect clients can be gatewayed via Pexip Infinity to SfB/Lync clients or SfB/Lync meetings if required (see Using Pexip Infinity as a Skype for Business / Lync gateway for more information).

Alternatively, your deployment may look like this. It is the same as the one above, except that SIP and H.323 devices are also routed through Proxying Edge Nodes instead of via third-party call control:

See Example deployment in an on-prem Skype for Business / Lync environment for more information about deploying Pexip Infinity with on-premises Microsoft Skype for Business / Lync.

Publicly-addressable Transcoding Conferencing Nodes

You can deploy all of your Transcoding Conferencing Nodes in a public DMZ, as shown below.

In this type of deployment:

  • One or more Transcoding Conferencing Nodes are deployed in a public DMZ. They have publicly-reachable IP addresses — either directly or via static NAT.
  • External SIP and H.323 endpoints and other forms of business-to-business video calls can connect directly to the public-facing IP addresses of the Conferencing Nodes.
  • Remote federated Skype for Business / Lync clients connect to Pexip Infinity and media can be routed directly through the public-facing IP addresses of the Conferencing Nodes (an on-premises Skype for Business / Lync server is not needed). As Conferencing Nodes support ICE, a TURN server is not required.
  • Infinity Connect clients (WebRTC and RTMP) can connect to Pexip Infinity and media can be routed directly to the Conferencing Nodes. In addition, you can optionally deploy a reverse proxy for load balancing, resiliency or branding.
  • In large deployments, you may want to consider deploying some dedicated Proxying Edge Nodes to manage the signaling and media connections between endpoints on the public internet and your Transcoding Conferencing Nodes.
  • Note that the Management Node will typically be deployed with a private IP address in the local enterprise network. You must ensure that there is no NAT between the Management Node and the Conferencing Nodes in the DMZ.

For more information about how to configure Conferencing Nodes that are deployed behind a static NAT device, see Network routing and addressing options for Conferencing Nodes and Firewall/NAT routing and addressing examples.
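One-to-one static NAT for a DMZ node can be sketched as follows, using iptables on a hypothetical perimeter device. The addresses (public 203.0.113.10, private 10.44.0.10) are examples only:

```shell
# Inbound: rewrite the public destination address to the node's private address
iptables -t nat -A PREROUTING  -d 203.0.113.10 -j DNAT --to-destination 10.44.0.10
# Outbound: present the node's traffic to the internet from its public address
iptables -t nat -A POSTROUTING -s 10.44.0.10 -j SNAT --to-source 203.0.113.10
```

With a mapping like this, the same public address also needs to be configured as the static NAT address of the Conferencing Node itself, so that the node advertises the correct address in signaling and media negotiation.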

Conferencing Nodes with dual network interfaces (NICs)

For additional deployment flexibility, you can configure a secondary network address on a Conferencing Node. Dual NICs are supported on both Transcoding Conferencing Nodes and Proxying Edge Nodes.

You would typically deploy a Conferencing Node with dual network interfaces when it is connected to a dedicated video zone, or when it is deployed in a public DMZ. In such deployments, the primary interface connects to an internal, private network segment within the enterprise (where the node can reach the Management Node and other Conferencing Nodes), and the secondary interface faces the video zone, or sits on the publicly-addressable side of the DMZ perimeter network, and is used for connecting to external endpoints and devices.

When a secondary network address is configured:

  • The primary address is always used for inter-node communication to the Management Node and to other Conferencing Nodes.
  • SSH connections can be made only to the primary interface.
  • The secondary address is always used for signaling and media (to endpoints and other video devices).
  • Connections to DNS, SNMP, NTP, syslog and so on go out from whichever interface is appropriate, based on routing.
  • You can have a mixture of any number of single-interfaced and dual-interfaced Conferencing Nodes, provided that all nodes can communicate with each other via their primary interfaces.
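The traffic split described above can be observed on any dual-homed Linux host via the kernel routing table. The following is an illustrative check; the addresses are examples, and interface names vary by platform:

```shell
# Routes to the Management Node and other Conferencing Nodes should resolve
# via the primary (internal) interface:
ip route get 10.44.0.1
# Routes towards external endpoints should resolve via the default gateway
# on the secondary (external) interface:
ip route get 203.0.113.50
```

Inter-node traffic therefore stays on the internal segment regardless of where the default route points, which is what allows the secondary interface to face an untrusted network.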

Note that dual network interfaces are not supported on Conferencing Nodes deployed in public cloud services (Azure, AWS or GCP).

For more information on configuring dual network interfaces, see Deploying Conferencing Nodes with dual network interfaces (NICs) and Firewall/NAT routing and addressing examples.

Deploying as a cloud service via Microsoft Azure, Amazon Web Services (AWS) or Google Cloud Platform (GCP)

The Pexip Infinity platform can be deployed in the Microsoft Azure, Amazon Web Services (AWS) or Google Cloud Platform (GCP) cloud.

There are three main deployment options for your Pexip Infinity platform when using a cloud service:

  • Private cloud: all nodes are deployed within a private cloud service. Private addressing is used for all nodes and connectivity is achieved by configuring a VPN tunnel between the corporate network and the cloud network. As all nodes are private, this is equivalent to an on-premises deployment which is only available to users internal to the organization.
  • Public cloud: all nodes are deployed within the cloud network. All nodes have a private address but, in addition, public IP addresses are allocated to each node. The nodes' private addresses are used only for inter-node communication; each node's public address is configured on that node as a static NAT address. Access to the nodes is permitted from the public internet, or from a restricted subset of networks, as required. Any systems or endpoints that send signaling and media traffic to those Pexip Infinity nodes must send that traffic to the nodes' public addresses. If you have internal systems or endpoints communicating with those nodes, you must ensure that your local network allows such routing.
  • Hybrid cloud: the Management Node, and optionally some Conferencing Nodes, are deployed in the corporate network. A VPN tunnel is created between the corporate network and the cloud network. Additional Conferencing Nodes are deployed in the cloud network and are managed from the on-premises Management Node. The cloud-hosted Conferencing Nodes can be either internally-facing, privately-addressed (private cloud) nodes; or externally-facing, publicly-addressed (public cloud) nodes; or a combination of private and public nodes (where the private nodes are in a different Pexip Infinity system location to the public nodes). You may also want to consider dynamic bursting, where the cloud-hosted Conferencing Nodes are only started up and used when you have reached capacity on your on-premises nodes.
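As an illustration of the public cloud option, each node's public address would typically be a reserved static IP from the cloud provider, so that the address (and hence the node's static NAT configuration) survives instance restarts. The following is a sketch using the Google Cloud CLI; the address name and region are examples:

```shell
# Reserve a regional static external IP for a cloud-hosted Conferencing Node
gcloud compute addresses create conf-node-1-ip --region=europe-west1
# Show the allocated address; this is the value to configure as the node's
# static NAT address
gcloud compute addresses describe conf-node-1-ip --region=europe-west1 \
    --format='value(address)'
```

The equivalent constructs in AWS and Azure are Elastic IP addresses and static public IP resources respectively.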

All of the Pexip nodes that you deploy in the cloud are completely dedicated to running the Pexip Infinity platform; you maintain full data ownership and control of those nodes.

For specific deployment information about each platform, see: