Deployment guidelines for Proxying Edge Nodes

You can deploy your Pexip Infinity system as either a mix of Proxying Edge Nodes and Transcoding Conferencing Nodes, or as a system that only contains Transcoding Conferencing Nodes.

A typical deployment scenario is to use Proxying Edge Nodes as a front for many privately-addressed Transcoding Conferencing Nodes. Those outward-facing proxying nodes receive all the signaling and media from endpoints and other external systems, and then forward that media onto other internally-located transcoding nodes, which perform the standard Pexip Infinity transcoding, gatewaying and conference hosting functions.

If you have a large Pexip deployment with 5 or more Conferencing Nodes, or one where you transcode media in multiple locations (such as within a DMZ), you should consider deploying Proxying Edge Nodes in addition to your Transcoding Conferencing Nodes. You can easily switch the role of a deployed Conferencing Node from a Transcoding Conferencing Node to a Proxying Edge Node and vice versa.

This topic covers:

  • Configuration summary
  • Configuring a Conferencing Node's proxying or transcoding role
  • Deployment recommendations
  • Deployment scenarios
  • Additional information when deploying Proxying Edge Nodes

Configuration summary

To enable Proxying Edge Nodes on your Pexip Infinity platform:

  1. A system location should not contain a mixture of proxying nodes and transcoding nodes. You should therefore decide whether you need to create new locations and assign the Conferencing Nodes that you want to use as proxying nodes to those locations:

    • Go to Platform > Locations to configure your locations.
    • Go to Platform > Conferencing Nodes to change the location assigned to any existing Conferencing Nodes.

    If you change the system location of a Conferencing Node, all existing calls will be disconnected and the Conferencing Node will be restarted.

  2. Assign a Role of Proxying Edge Node to those Conferencing Nodes that you want to deploy as proxying nodes:

    • For existing nodes, do this via Platform > Conferencing Nodes.
    • If you are deploying new Conferencing Nodes, you select the Role when providing the name and network configuration settings for the new node.

    If you change the role of a Conferencing Node, all existing calls will be disconnected and the Conferencing Node will be restarted.

  3. Ensure that the locations containing your proxying nodes are not configured with a Transcoding location of This location. Instead, set the Transcoding location to a location that contains transcoding nodes.
  4. Ensure that your call control systems and DNS are configured to route calls to your proxying nodes only.
  5. Ensure that your proxying nodes have appropriate certificates installed (Platform > TLS certificates).

    In Microsoft Skype for Business and Lync integrations, your proxying nodes that take federated calls must have an appropriate certificate. The certificates on any externally facing nodes must be signed by a public CA provider (and thus will be trusted by a Skype for Business / Lync Edge Server). Any internally-facing nodes typically need certificates signed by your private CA. A quick way to verify what a node presents is shown below.
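As a quick sanity check of step 5, you can perform a TLS handshake against a proxying node and inspect the certificate it presents. The following minimal Python sketch uses only the standard library; the hostname is a placeholder taken from the examples later in this topic, and port 5061 (SIP/TLS) is one reasonable probe point:

    import socket
    import ssl

    NODE = "proxy01.vc.example.com"   # placeholder -- use one of your own proxying nodes
    PORT = 5061                       # SIP/TLS

    # The default context verifies the chain against the system CA store
    # and checks that the certificate matches the hostname -- broadly the
    # same checks a Skype for Business / Lync Edge Server performs.
    context = ssl.create_default_context()

    with socket.create_connection((NODE, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=NODE) as tls:
            cert = tls.getpeercert()
            print("Subject:", dict(pair[0] for pair in cert["subject"]))
            print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
            print("Expires:", cert["notAfter"])

A certificate that is untrusted or does not match the hostname raises ssl.SSLCertVerificationError, which indicates the node's certificate needs attention before external systems will connect.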

Full deployment guidelines and example scenarios are described below.

Configuring a Conferencing Node's proxying or transcoding role

Each Conferencing Node is configured with a role — either as a Proxying Edge Node or as a Transcoding Conferencing Node. You specify the Conferencing Node's role when it is first deployed, but you can change its role later if required.

To change a Conferencing Node's role:

  1. Go to Platform > Conferencing Nodes and select the name of the Conferencing Node.
  2. Select the new Role as appropriate and select Save.

If you change the role of a Conferencing Node, all existing calls will be disconnected and the Conferencing Node will be restarted.
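The steps above use the Administrator interface. If you prefer to script the change, the Management Node also exposes a configuration REST API; the sketch below is an illustration in Python using the requests library. The resource name (worker_vm), the response layout and the role field name are assumptions about the API schema — browse /api/admin/configuration/v1/ on your own Management Node to confirm them before use.

    import requests

    MGR = "https://manager.example.com"   # placeholder Management Node address
    AUTH = ("apiuser", "apipassword")     # placeholder credentials

    # List the Conferencing Nodes. The resource name "worker_vm" and the
    # response layout are assumptions -- verify against your own API browser.
    resp = requests.get(f"{MGR}/api/admin/configuration/v1/worker_vm/", auth=AUTH)
    resp.raise_for_status()
    for node in resp.json().get("objects", []):
        print(node.get("name"), node.get("resource_uri"))

    # Change one node's role. The field name "node_type" and its value are
    # also assumptions. Remember: this disconnects the node's calls and
    # restarts the node, exactly as the warning above describes.
    resp = requests.patch(f"{MGR}/api/admin/configuration/v1/worker_vm/1/",
                          auth=AUTH, json={"node_type": "proxying"})
    resp.raise_for_status()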

If a call is received in a location that contains Proxying Edge Nodes, that location must be configured with a Transcoding location that contains your Transcoding Conferencing Nodes.

Deployment recommendations

If you want to deploy Proxying Edge Nodes in your Pexip Infinity platform, we recommend the following guidelines:

  • A system location should not contain a mixture of proxying nodes and transcoding nodes. This separation of roles by location simplifies load-balancing and conference distribution, and makes it easier for you to manage and monitor your externally-facing Proxying Edge Nodes distinctly from your Transcoding Conferencing Nodes. For example, in the scenario described here, the Conferencing Nodes in the two locations "London Edge" and "Oslo Edge" are Proxying Edge Nodes, and thus those nodes and locations are not involved in the actual hosting of any conferences; they forward the media onto the Transcoding Conferencing Nodes in the "London Transcoding" and "Oslo Transcoding" locations respectively.
  • The location containing your proxying nodes should set its Transcoding location to a location that contains your transcoding nodes. You can also optionally specify a primary overflow location and a secondary overflow location for additional transcoding resources; that is, the proxying nodes proxy to the Transcoding location in the first instance, and then to the primary and secondary overflow locations in turn.
  • When a location contains Proxying Edge Nodes, those nodes only require IPsec connectivity with:

    • any other proxying nodes in that location
    • all nodes in the transcoding location, and the primary and secondary overflow locations that are associated with that location
    • the Management Node.

    This means that the proxying nodes in one location do not need to have a direct network connection to other proxying nodes in other locations.

  • Where you have a geographically distributed platform, each physical region should typically have one location containing 1-3 proxying nodes (for resilience and capacity), and one or more additional locations containing transcoding nodes (typically 2-25 nodes per location, depending on capacity requirements).
  • Ensure that your call control systems and DNS are configured to route calls to your proxying nodes only, and ensure that your proxying nodes have appropriate certificates installed. (If a call is routed to a transcoding node it can still accept the signaling and handle the call media to and from the endpoint, but we recommend that you defer these functions to your proxying nodes.)
  • Lineside media handling is allocated to the least-loaded proxying node in the location that receives the signaling; media is not allocated to proxying nodes in other locations (there is currently no location overflow for proxying nodes). The node that receives the signaling always continues to handle the signaling.
  • The proxying node must then forward the media onto a transcoding node. Pexip Infinity's standard media allocation rules decide which transcoding node receives the proxied media: in the first instance it uses a transcoding node in the Transcoding location that is associated with the location of the proxying node that is forwarding the media. If there is no capacity to host the conference in the Transcoding location, a transcoding node in the primary overflow location, and then the secondary overflow location, is used (if overflow locations are configured). This selection order is sketched in code after this list.
  • As with all deployment scenarios, there cannot be NAT between any Pexip nodes, but there can be NAT between external devices/internet and proxying nodes. Proxying nodes support all call protocols (Skype for Business / Lync, H.323, SIP, WebRTC and RTMP), and can be deployed with dual network interfaces and static NAT if required.
  • You only need to deploy certificates on your Proxying Edge Nodes — only those nodes that handle the signaling connection to an endpoint or other system (such as a Skype for Business / Lync Edge Server) need to be configured with the appropriate certificates to allow that endpoint/system to communicate with Pexip Infinity. If you subsequently deploy more Transcoding Conferencing Nodes to increase conferencing capacity, you do not need to add certificates onto those additional nodes.
  • If you use dynamic bursting to a cloud service, the nodes in your cloud overflow locations should all be Transcoding Conferencing Nodes. You only have to ensure that your proxying nodes can route to the subnet of your cloud-hosted nodes, as in this scenario endpoints never connect directly to a cloud-hosted node. Note that you cannot increase your proxying resources via dynamic bursting; only transcoding nodes can be dynamically burst.
  • The servers hosting Proxying Edge Nodes do not require as high a specification as those hosting Transcoding Conferencing Nodes, because proxying is much less processor-intensive than transcoding. The minimum functional CPU instruction set for a proxying node is AVX, which was first available in the Sandy Bridge generation. You still need multiple proxying nodes for resilience and capacity. We recommend allocating 4 vCPU and 4 GB RAM (both of which must be dedicated resources) to each Proxying Edge Node, with a maximum of 8 vCPU and 8 GB RAM for large or busy deployments.
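To make the overflow behaviour concrete, the following self-contained Python sketch models the selection order described above. The class and attribute names are illustrative stand-ins, not Pexip internals:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Location:
        """Illustrative stand-in for a system location -- not a Pexip API."""
        name: str
        free_capacity: int = 0
        transcoding_location: Optional["Location"] = None
        primary_overflow_location: Optional["Location"] = None
        secondary_overflow_location: Optional["Location"] = None

    def pick_transcoding_location(edge: Location) -> Optional[Location]:
        """Apply the order described above: the Transcoding location first,
        then the primary and then the secondary overflow location."""
        for loc in (edge.transcoding_location,
                    edge.primary_overflow_location,
                    edge.secondary_overflow_location):
            if loc is not None and loc.free_capacity > 0:
                return loc
        return None  # no transcoding capacity available anywhere

    # Example: "London Edge" normally proxies to "London Transcoding", but
    # overflows to "Oslo Transcoding" when London is full.
    london = Location("London Transcoding", free_capacity=0)
    oslo = Location("Oslo Transcoding", free_capacity=10)
    edge = Location("London Edge", transcoding_location=london,
                    primary_overflow_location=oslo)
    print(pick_transcoding_location(edge).name)  # -> Oslo Transcoding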

Deployment scenarios

Depending on your requirements, there are a range of scenarios where you can make use of a Distributed Edge deployment. These are explained below and range from a basic DMZ Edge scenario to an advanced multi-Edge deployment with dynamic cloud bursting.

Some of these scenarios require routed connections from DMZ nodes — see Applying static routes to enable routing between externally-facing nodes and local network nodes for more information.

Note that in all Pexip Infinity deployment scenarios:

  • The Management Node must be able to reach all Conferencing Nodes (Proxying Edge Nodes and Transcoding Conferencing Nodes) and vice versa.
  • Each Conferencing Node must be able to reach every other Conferencing Node (Proxying Edge Nodes and Transcoding Conferencing Nodes), except:
    • When a location contains Proxying Edge Nodes, those nodes only require IPsec connectivity with:

      • any other proxying nodes in that location
      • all nodes in the transcoding location, and the primary and secondary overflow locations that are associated with that location
      • the Management Node.

      This means that the proxying nodes in one location do not need to have a direct network connection to other proxying nodes in other locations.

  • Any internal firewalls must be configured to allow UDP port 500 and traffic using IP protocol 50 (ESP) in both directions between all Pexip nodes.
  • There cannot be a NAT between any Pexip nodes.

The deployment examples use the following symbols:

  • Lineside (endpoint/client) media path.
  • Pexip to Pexip inter-node media path.
  • Pexip to Pexip inter-node signaling.
  • Conferencing Node lineside interface (blue dots). These are the only connection points that endpoints/clients see.
  • Conferencing Node internal interface (orange dots).
  • Single NIC Proxying Edge Node, with or without lineside NAT.
  • Dual NIC Proxying Edge Node (blue = lineside interface for clients/endpoints; orange = Pexip internal interface).
  • Single NIC Transcoding Conferencing Node, with or without lineside NAT.
  • Single NIC Transcoding Conferencing Node that does not (in these scenarios) need to talk to clients, as all media is proxied.

Basic public DMZ deployment with single NIC Proxying Edge Node

This deployment scenario shows a Proxying Edge Node with a single NIC in a public DMZ. In this example:

  • The Proxying Edge Node in the DMZ has a public IP address which is NATted to a private IP address.
  • The Proxying Edge Node's private IP address in the DMZ needs a routed connection to the enterprise network (10.40.0.x).
  • The Transcoding Conferencing Node in the enterprise network also supports lineside connections to devices in the enterprise network.

Basic public DMZ deployment with dual NIC Proxying Edge Node

This deployment scenario is identical to the previous example, except that in this case the Proxying Edge Node has dual NICs. In this example:

  • The secondary NIC on the Proxying Edge Node in the DMZ has a public IP address (which can optionally be NATted).
  • The primary NIC on the Proxying Edge Node has a private IP address in the DMZ which needs a routed connection to the enterprise network (10.40.0.x).

Basic public DMZ deployment with additional cloud-hosted transcoding resource

This deployment scenario builds on the previous example by adding additional overflow Transcoding Conferencing Node resources on a cloud-hosted service. In this example:

  • Enterprise-based devices could get their media assigned to overflow nodes in the cloud service if the on-premises Transcoding Conferencing Nodes are at full capacity.
  • The cloud-hosted overflow Transcoding Conferencing Nodes could be "always on" or configured for dynamic bursting.

Multi-Edge deployment with public DMZ and video zone routing

This deployment scenario uses another Proxying Edge Node to handle signaling and media traffic between the enterprise network and the video conferencing zone. In this example we have:

  • Dual NIC Proxying Edge Node in the DMZ: the secondary lineside NIC has a public IP address (which can optionally be NATted); the primary NIC has a private IP address in the DMZ which needs a routed connection to the enterprise network (10.40.0.x).
  • Dual NIC Proxying Edge Node in the video conferencing network: the secondary lineside NIC only needs connectivity to the video conferencing zone endpoints (192.168.5.x). The primary internal interface needs a routed connection to the enterprise network.
  • A routed connection is not required between proxying nodes in the DMZ and the proxying nodes in the enterprise network.
  • No endpoints/clients talk directly to the Transcoding Conferencing Nodes (their media is terminated lineside on Proxying Edge Nodes).

Multi-Edge deployment with public DMZ, video zone and PC zone routing

This deployment scenario adds to the previous example by deploying another Proxying Edge Node to manage connections to a PC network on another internal subnet. Thus, in this example:

  • The private IP addresses in the DMZ (172.16.0.x) and the internal NICs (10.50.0.x) on the proxying nodes handling the video conferencing and PC networks need a routed connection to the enterprise network (10.40.0.x).
  • The video conferencing network lineside interface only needs connectivity to video conferencing zone endpoints (192.168.5.x), and the PC network lineside interface only needs connectivity to the PC network (10.200.1.x).
  • A routed connection is not required between proxying nodes in the DMZ and the proxying nodes in the enterprise network.

Multi-Edge deployment with public DMZ, video zone, PC zone routing and cloud resources

This deployment scenario brings together all of the elements from the previous examples:

  • The private IP addresses in the DMZ (172.16.0.x) and the internal NICs (10.50.0.x) on the proxying nodes handling the video conferencing and PC networks need a routed connection to the enterprise network (10.40.0.x).
  • The video conferencing network lineside interface only needs connectivity to video conferencing zone endpoints (192.168.5.x), and the PC network lineside interface only needs connectivity to the PC network (10.200.1.x).
  • No endpoints/clients talk directly to the Transcoding Conferencing Nodes in the enterprise network or the cloud-hosted network (their media is terminated lineside on Proxying Edge Nodes). Only the Pexip nodes need to be able to route traffic to the cloud-hosted nodes.
  • A routed connection is not required between proxying nodes in the DMZ and the proxying nodes in the enterprise network.

Microsoft Skype for Business and Lync integration (public DMZ / hybrid)

This deployment scenario shows how you can integrate multiple proxying nodes with a public DMZ / hybrid Skype for Business / Lync environment. In this example:

  • The Pexip Infinity environment SIP domain is vc.example.com.
  • Your Skype for Business / Lync federation DNS SRV record _sipfederationtls._tcp.vc.example.com is associated with the hostname sip.vc.example.com.
  • Round-robin DNS A-records are configured for the sip.vc.example.com hostname that point to the IP addresses of your proxying nodes, for example:

    Hostname               Host IP address
    sip.vc.example.com.    198.51.99.200
    sip.vc.example.com.    198.51.99.201

    and there are also the "standard" A-records for each proxying node, based on their individual hostnames, which resolve to the same IP addresses, for example:

    Hostname                   Host IP address
    proxy01.vc.example.com.    198.51.99.200
    proxy02.vc.example.com.    198.51.99.201
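You can verify that both sets of records resolve as intended from outside your network. This short Python sketch uses the third-party dnspython package (pip install dnspython); the names are the examples above:

    import dns.resolver

    DOMAIN = "vc.example.com"

    # The federation SRV record should point at sip.vc.example.com.
    for srv in dns.resolver.resolve(f"_sipfederationtls._tcp.{DOMAIN}", "SRV"):
        print("SRV target:", srv.target, "port:", srv.port)

    # The round-robin A-records should return one address per proxying node.
    for a in dns.resolver.resolve(f"sip.{DOMAIN}", "A"):
        print("A record:", a.address)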

Microsoft Skype for Business and Lync integration (on-premises)

This deployment scenario shows how you can integrate internal proxying nodes with an on-premises Skype for Business / Lync environment. In this example:

  • The Pexip Infinity environment SIP domain is vc.example.com.
  • The Skype for Business / Lync environment has a static SIP domain route for the SIP domain vc.example.com from the Front End Pool towards a trusted application pool of local proxying nodes.
  • Additional proxying nodes are deployed in the DMZ for connectivity with other external devices.

Additional information when deploying Proxying Edge Nodes

  • Any Conferencing Node can be a Proxying Edge Node or a Transcoding Conferencing Node, regardless of the hypervisor or cloud platform it is deployed on.
  • Even though any media encryption/decryption is performed by a Proxying Edge Node, all subsequent communications with Transcoding Conferencing Nodes are over a secure IPsec connection.
  • Proxying Edge Nodes can be deployed anywhere in your network (they do not have to be in a DMZ, for example), but they must be able to forward the media onto at least one Transcoding Conferencing Node.
  • Devices can register to Proxying Edge Nodes or to Transcoding Conferencing Nodes.
  • A Proxying Edge Node uses fewer resources per connection than a node that is performing transcoding. A proxying node uses approximately the equivalent of 3 audio-only resources to proxy a video call (of any resolution), and 1 audio-only resource to proxy an audio call (a worked example follows this list).
  • Media streams for presentation content to/from an endpoint are forwarded in the same manner as video/audio streams; however, they could be handled by different Proxying Edge Nodes (and Transcoding Conferencing Nodes).
  • Bandwidth limitations cannot be applied to the forwarding connection between a Proxying Edge Node and a Transcoding Conferencing Node.
  • Proxying Edge Nodes do not affect the call licensing requirements for an endpoint connection.
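As a worked example of the resource figures above, this small Python function estimates the load a mix of proxied calls places on a proxying node, in audio-equivalent resource units. The per-call figures come from the bullet above; how many units a given node can sustain depends on its hardware:

    def proxying_load(video_calls: int, audio_calls: int) -> int:
        """Estimate proxying load in audio-equivalent resource units:
        ~3 units per proxied video call (any resolution), 1 per audio call."""
        return 3 * video_calls + 1 * audio_calls

    # Example: 40 proxied video calls plus 25 proxied audio calls
    print(proxying_load(40, 25))  # -> 145 audio-equivalent units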