About system locations

System locations are typically used to group together Conferencing Nodes that are in the same physical location. They serve several purposes, which are described in the sections below.

A Conferencing Node's system location is assigned when the node is deployed, but it can be subsequently modified.

If you change the system location of a Conferencing Node, all existing calls will be disconnected and the Conferencing Node will be restarted.

Intelligent and bandwidth-efficient routing

Grouping Conferencing Nodes by location allows Pexip Infinity to make intelligent decisions about routing. For example, if a conference is taking place across many Conferencing Nodes in two different locations, then Pexip Infinity will nominate one node in each location to act as an intermediary for that location. Media streams are sent between each intermediary only, rather than multiple streams being sent between each of the nodes at each of the locations. For more information, see Conference distribution.
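
As a rough illustration of the saving, the sketch below compares the number of inter-location media streams with and without nominated intermediaries. The node counts are hypothetical and the model is deliberately simplified:

    # Illustrative comparison of inter-location backplane streams for a
    # conference spanning two locations (hypothetical node counts).

    def streams_without_intermediaries(nodes_a: int, nodes_b: int) -> int:
        # Every node in location A would exchange media with every node in B.
        return nodes_a * nodes_b

    def streams_with_intermediaries() -> int:
        # One nominated node per location relays media for that location,
        # so a single inter-location stream path suffices.
        return 1

    a, b = 4, 3  # hypothetical node counts in each location
    print("Full mesh between locations:", streams_without_intermediaries(a, b))  # 12
    print("With nominated intermediaries:", streams_with_intermediaries())       # 1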

Calls that are received (for signaling purposes) can have their media received and transcoded by Conferencing Nodes within the same location as the node that is handling the signaling, or a different location can be nominated to handle the media. If the location handling the media reaches its transcoding capacity for a conference instance, additional "overflow" locations can be utilized to handle the media for new conference participants that connect to nodes within the original signaling location. For more information, see Overflow locations.
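
The fallback order described above (the nominated media location first, then the overflow locations) can be modeled as a simple chain. This is a conceptual sketch only; the function names and capacity check are illustrative, not part of Pexip Infinity:

    from typing import Optional

    def has_transcoding_capacity(location: str) -> bool:
        # Placeholder: in reality this reflects the live load on the
        # Transcoding Conferencing Nodes in the location.
        return True

    def choose_media_location(signaling_location: dict) -> Optional[str]:
        # Conceptual fallback chain for where a new participant's media
        # is transcoded; all names here are illustrative.
        candidates = [
            signaling_location.get("transcoding_location"),       # defaults to the signaling location itself
            signaling_location.get("primary_overflow_location"),
            signaling_location.get("secondary_overflow_location"),
        ]
        for location in candidates:
            if location is not None and has_transcoding_capacity(location):
                return location
        return None  # no capacity anywhere: the call cannot be placed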

Locations can also be configured to contain Proxying Edge Nodes that handle the signaling and the media connection with the calling device, but proxy the media onto another location for transcoding purposes (see Deployments including Proxying Edge Nodes below).

When grouping Conferencing Nodes into different locations, you should consider the amount of packet loss within your network. If there is a chance of packet loss due to network congestion between different groups of nodes, then they should be assigned separate system locations even if they are in the same physical location.

Deployments including Proxying Edge Nodes

You can deploy your Pexip Infinity system as either a mix of Proxying Edge Nodes and Transcoding Conferencing Nodes, or as a system that only contains Transcoding Conferencing Nodes.

A typical deployment scenario is to use Proxying Edge Nodes as a front for many privately-addressed Transcoding Conferencing Nodes. Those outward-facing proxying nodes would receive all the signaling and media from endpoints and other external systems, and then forward that media onto other internally-located transcoding nodes to perform the standard Pexip Infinity transcoding, gatewaying and conferencing hosting functions.

A system location should not contain a mixture of proxying nodes and transcoding nodes. This separation of roles across locations simplifies load-balancing and conference distribution, and makes it easier for you to manage and monitor your externally-facing Proxying Edge Nodes distinctly from your Transcoding Conferencing Nodes. Hence, in the example scenario described here, the Conferencing Nodes in the two locations "London Edge" and "Oslo Edge" are Proxying Edge Nodes, and thus those nodes and locations are not involved in the actual hosting of any conferences. They forward the media onto the Transcoding Conferencing Nodes in the "London Transcoding" and "Oslo Transcoding" locations respectively.

Network deployment considerations

Nodes within a location actively synchronize with each other and may require a relatively high amount of network bandwidth for their communication. Pexip's backplane topology for distributed conferences assumes a high-availability, low-latency, high-bandwidth network link between nodes within a location.

Therefore, while physical proximity is not a requirement, nodes in the same system location should typically be in the same physical datacenter, or in physically proximate datacenters with an excellent link between them. There should be no more than 150 ms latency between Conferencing Nodes. If (as is often the case) you do not have such a network between your datacenters, we recommend assigning those nodes to different system locations.
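
If you are unsure whether two datacenters meet the 150 ms guideline, a quick round-trip measurement between candidate hosts can help. This sketch shells out to the standard Linux ping utility; the hostname is a placeholder:

    import re
    import subprocess

    def avg_rtt_ms(host: str, count: int = 5) -> float:
        # Average round-trip time to host in ms, parsed from Linux ping output.
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, check=True,
        ).stdout
        # Summary line looks like: rtt min/avg/max/mdev = 0.045/0.052/0.061/0.006 ms
        match = re.search(r"= [\d.]+/([\d.]+)/", out)
        if not match:
            raise RuntimeError(f"could not parse ping output for {host}")
        return float(match.group(1))

    rtt = avg_rtt_ms("node-b.datacenter2.example.com")  # placeholder hostname
    print(f"avg RTT {rtt:.1f} ms:", "same location OK" if rtt <= 150 else "use separate locations")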

In addition, the following rules apply (a validation sketch follows the list):

  • Conferencing Nodes in a DMZ must not be configured with the same system location as nodes in a local network. This ensures that load balancing is not performed across nodes in the DMZ and nodes in the local network.
  • Any Conferencing Nodes that are configured with a static NAT address must not be configured with the same system location as nodes that do not have static NAT enabled. This ensures that load balancing is not performed across nodes servicing external clients and nodes that can only service private IP addresses.
  • We recommend that a system location does not contain a mixture of on-premises and cloud-hosted (Azure, AWS or GCP) Conferencing Nodes.
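
These rules amount to homogeneity checks on the nodes within each location. A minimal sketch, using hypothetical node attributes (Pexip Infinity does not expose nodes this way; this is purely illustrative):

    from dataclasses import dataclass

    @dataclass
    class Node:
        # Hypothetical node attributes, for illustration only.
        name: str
        in_dmz: bool
        static_nat: bool
        platform: str  # e.g. "on-prem", "azure", "aws", "gcp"

    def check_location_homogeneity(location: str, nodes: list) -> list:
        # Flag locations that mix node types which should be kept separate.
        problems = []
        if len({n.in_dmz for n in nodes}) > 1:
            problems.append(f"{location}: mixes DMZ and local-network nodes")
        if len({n.static_nat for n in nodes}) > 1:
            problems.append(f"{location}: mixes static-NAT and non-NAT nodes")
        if len({n.platform == "on-prem" for n in nodes}) > 1:
            problems.append(f"{location}: mixes on-premises and cloud-hosted nodes")
        return problems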

See Network deployment options for more information about the various deployment scenarios.

Configuring services

Many platform and system services are configured on a per-location basis, and are used by all Conferencing Nodes in that location.

DNS and NTP servers

You must select at least one DNS server and at least one NTP server to be used by all of the Conferencing Nodes in that location. This allows you, for example, to assign nodes in a DMZ to a location that uses different DNS and NTP servers from those used by a location containing nodes on a local, internal network.

H.323 gatekeepers, SIP proxies and Skype for Business / Lync servers

You can optionally specify the H.323 gatekeeper, SIP proxy, and Skype for Business / Lync server to use to route outbound H.323, SIP and MS-SIP calls placed from nodes within that location when adding a new participant to a conference.

For more information, see About H.323 gatekeepers and SIP proxies and About Skype for Business / Lync servers.

Web proxies

You can specify a web proxy to use for outbound web requests from all Conferencing Nodes in a system location. For the current release, the web proxy will be used automatically for incident reporting and for any One-Touch Join-related requests. Support for other types of outbound web requests will be added in later releases. For more information, see Using a web proxy.

TURN servers and STUN servers

Pexip Conferencing Nodes can utilize a TURN server and negotiate TURN relays with the following ICE-capable clients:

  • Skype for Business / Lync clients
  • WebRTC clients (the Infinity Connect web app on the latest browsers, and the desktop and mobile clients)

If these endpoints will be connecting to privately-addressed "on-premises" Conferencing Nodes, you must configure Pexip Infinity with the address of at least one TURN server that it can offer to ICE clients.

In some deployment scenarios, where the TURN server is located inside rather than outside the enterprise firewall, you may need to configure a separate STUN server so that each Conferencing Node can discover its public NAT address.
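
To make the discovery mechanism concrete, the sketch below does what any host behind NAT can do: send a STUN Binding request (RFC 5389) and read the XOR-MAPPED-ADDRESS from the response. The STUN server address is a placeholder:

    import os
    import socket
    import struct

    STUN_SERVER = ("stun.example.com", 3478)  # placeholder server
    MAGIC_COOKIE = 0x2112A442

    def discover_public_address():
        # Build a STUN Binding request: type=0x0001, length=0,
        # magic cookie, 12-byte transaction ID.
        txn_id = os.urandom(12)
        request = struct.pack(">HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(3)
            sock.sendto(request, STUN_SERVER)
            data, _ = sock.recvfrom(2048)
        # Walk the attributes after the 20-byte header.
        pos = 20
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from(">HH", data, pos)
            if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS (IPv4)
                port = struct.unpack_from(">H", data, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
                raw_ip = struct.unpack_from(">I", data, pos + 8)[0] ^ MAGIC_COOKIE
                return socket.inet_ntoa(struct.pack(">I", raw_ip)), port
            pos += 4 + attr_len + (-attr_len % 4)  # attributes are 4-byte aligned
        raise RuntimeError("no XOR-MAPPED-ADDRESS in response")

    print(discover_public_address())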

You can also configure the STUN servers to be used by Infinity Connect WebRTC clients when they connect to a Conferencing Node in this location.

For more information, see Using TURN servers with Pexip Infinity and Using STUN servers with Pexip Infinity.

SNMP NMSs

If you have enabled SNMP support on one or more Conferencing Nodes in a particular location, you must also select the SNMP Network Management System (NMS) to which SNMP traps will be sent. The selected NMS is used for all Conferencing Nodes in the location that have SNMP support enabled.

Pexip Infinity does not currently support traps with SNMPv3. If traps are required, use SNMPv2c.

For more information, see Monitoring via SNMP.

Policy profiles

Policy profiles specify how Pexip Infinity uses external policy and/or local policy to control its call policy and routing decisions. You can configure Pexip Infinity to use a different policy profile per system location.

For more information, see Using external and local policy to control Pexip Infinity behavior.

Event sinks

In busy deployments where live event reporting is required, you can configure event sinks for each location. These are external service(s) to which Conferencing Nodes in this location send event information. For more information, see About event sinks.
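
As an illustration of what an event sink can look like on the receiving end, here is a minimal HTTP listener that logs whatever is posted to it. This assumes events are delivered as HTTP POST requests; consult About event sinks for the actual transport and payload schema:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class EventSinkHandler(BaseHTTPRequestHandler):
        # Minimal receiver that logs whatever the Conferencing Nodes post.
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            print(self.path, body.decode("utf-8", errors="replace"))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8080), EventSinkHandler).serve_forever()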

Configuring system locations

To add, edit or delete system locations, go to Platform > Locations.

You should wait at least 90 seconds for any changes in configuration to be synchronized to all Conferencing Nodes; this may take longer in large deployments. You can go to Status > Conferencing Nodes to check when configuration was last updated.
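
Locations can also be managed programmatically. The sketch below assumes the Pexip Infinity management API exposes a system_location configuration resource at the path shown; verify the exact endpoint, field names and authentication method against your management API documentation:

    import requests

    MANAGER = "https://manager.example.com"  # placeholder Management Node address
    # Assumed resource path; check your management API documentation.
    ENDPOINT = MANAGER + "/api/admin/configuration/v1/system_location/"

    payload = {
        # Assumed field names; verify against the API schema.
        "name": "Oslo Transcoding",
        "description": "Internal transcoding nodes in the Oslo datacenter",
    }

    resp = requests.post(ENDPOINT, json=payload, auth=("admin", "password"))  # placeholder credentials
    resp.raise_for_status()
    print("Created:", resp.headers.get("Location"))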

The available options are:

Name: The name you want to give to this location.
Description: An optional field where you can provide more information about the location.
DNS servers: From the list of configured DNS servers, select one or more DNS servers to be used by all the Conferencing Nodes in this location.
NTP servers: From the list of configured NTP servers, select one or more NTP servers to be used by all the Conferencing Nodes in this location.
H.323 gatekeeper: The H.323 gatekeeper to use for outbound H.323 calls from this location, when adding an H.323 participant to a conference. For more information, see About H.323 gatekeepers and SIP proxies.
SNMP NMS: The Network Management System to which SNMP traps for all Conferencing Nodes in this location will be sent. For more information, see Monitoring via SNMP.
SIP proxy: The SIP proxy to use for outbound SIP calls from this location, when adding a SIP participant to a conference. For more information, see About H.323 gatekeepers and SIP proxies.
Web proxy: The web proxy to be used for outbound web requests from all Conferencing Nodes in this location. Currently this applies to incident reporting and One-Touch Join requests only.
Lync / Skype for Business server: The Skype for Business / Lync server to use for outbound MS-SIP calls from this location, when adding a SfB/Lync participant to a conference. For more information, see About Skype for Business / Lync servers.
Microsoft Teams Connector: The Teams Connector to use for outbound calls to Teams meetings from this location, if a Virtual Reception or Call Routing Rule does not explicitly specify the Teams Connector to use.
TURN server: The TURN server to use when ICE clients (such as Skype for Business / Lync clients and Infinity Connect WebRTC clients) located outside the firewall connect to a Conferencing Node in this location. For more information, see Using TURN servers with Pexip Infinity.
STUN server: The STUN server to be used by Conferencing Nodes in this location to determine the public IP address to signal to ICE clients (such as Skype for Business / Lync clients and Infinity Connect WebRTC clients) located outside the firewall. For more information, see Using STUN servers with Pexip Infinity.
Client STUN servers: The STUN servers to be used by Infinity Connect WebRTC clients when they connect to a Conferencing Node in this location. For more information, see Using STUN servers with Pexip Infinity.
MTU

(Maximum Transmission Unit) The size of the largest packet that can be transmitted via the network interfaces of the nodes in this location. Whether you need to specify an MTU value here depends on your network topology.

If any of the Conferencing Nodes in this location are running in Google Cloud Platform, the MTU must not be higher than 1460 bytes.

Default: 1500

DSCP value for media

An optional Quality of Service (QoS) setting used to prioritize different types of traffic in large, complex networks.

This DSCP value tags the media traffic from Conferencing Nodes in this system location that is sent line side to endpoints and over the IPsec backplanes to other Pexip Conferencing Nodes.

DSCP value for signaling

An optional Quality of Service (QoS) setting used to prioritize different types of traffic in large, complex networks.

This DSCP value tags the signaling traffic from Conferencing Nodes in this system location that is sent line side to endpoints and over the IPsec backplanes to other Pexip Conferencing Nodes.

Note that some IPsec traffic between nodes (configuration synchronization and other non-realtime traffic) remains untagged.

Also see DSCP value for management traffic in Global settings.
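
When verifying DSCP tagging in packet captures, remember that the 6-bit DSCP value occupies the upper six bits of the 8-bit TOS/Traffic Class byte. A quick conversion helper:

    def dscp_to_tos(dscp: int) -> int:
        # The 6-bit DSCP value sits above the two ECN bits in the TOS byte.
        if not 0 <= dscp <= 63:
            raise ValueError("DSCP must be 0-63")
        return dscp << 2

    # Example: Expedited Forwarding (EF, DSCP 46) appears as TOS 0xB8 in captures.
    assert dscp_to_tos(46) == 0xB8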

Transcoding location

The system location to handle media transcoding for calls (signaling) received in, or sent from, this location.

For calls received on a Proxying Edge Node, the media connection with the calling device is handled by a proxying node in this location, and the media is forwarded to a Transcoding Conferencing Node in the nominated Transcoding location (or an overflow location if necessary).

For calls received on a Transcoding Conferencing Node, the media connection with the calling device and the transcoding is handled by a Transcoding Conferencing Node in the nominated Transcoding location.

By default the transcoding location is set to This location, i.e. the same location where the call signaling is handled, but you can change it to any of the other configured locations. You must always choose a different transcoding location if this location contains Proxying Edge Nodes.

All calls have to be transcoded somewhere. You should ensure that the selected location contains Transcoding Conferencing Nodes, otherwise calls will fail due to insufficient resources (unless you have also configured a primary or secondary overflow location).

Note that if you change the media-handling locations for a proxying location (i.e. its transcoding, primary or secondary overflow locations), any proxied calls from that location that are currently being handled by the previously configured media locations will be dropped.

See Handling of media and signaling for more information.

Primary overflow location: An optional field where you can select the system location to handle media when capacity is reached in the Transcoding location, for calls (signaling) being handled in this location. For more information, see Overflow locations.
Secondary overflow location: An optional field where you can select the system location to handle media when capacity is reached in both the Transcoding location and the Primary overflow location, for calls (signaling) being handled in this location.
Pexip Infinity domain (for Lync / Skype for Business integration)

The name of the SIP domain that is routed from Microsoft Skype for Business / Lync to this Pexip Infinity location, either as a static route or via federation.

This is an optional field. If configured, it is used instead of the global Pexip Infinity domain in outbound calls to Skype for Business / Lync from Conferencing Nodes in this location.

Policy profile: The policy profile to be used by Conferencing Nodes in this system location. For more information, see Using external and local policy to control Pexip Infinity behavior.
Event sinks: The external service(s) to which Conferencing Nodes in this location send event information. For more information, see About event sinks.
Enable PIN brute force resistance in this location

Whether PIN brute force resistance is enabled for the Conferencing Nodes in this location:

  • Use Global PIN brute force resistance setting: as per the global configuration setting.
  • No: PIN brute force resistance is disabled for nodes in this location.
  • Yes: PIN brute force resistance is enabled for nodes in this location.

When some locations have protection enabled and others do not, the PIN brute force resistance setting is applied according to the location of the node that receives the call signaling.

Default: Use Global PIN brute force resistance setting.

Enable VOIP scanner resistance in this location

Whether VOIP scanner resistance is enabled for the Conferencing Nodes in this location:

  • Use Global VOIP scanner resistance setting: as per the global configuration setting.
  • No: VOIP scanner resistance is disabled for nodes in this location.
  • Yes: VOIP scanner resistance is enabled for nodes in this location.

When some locations have protection enabled and others do not, the VOIP scanner resistance setting is applied according to the location of the node that receives the call signaling.

Default: Use Global VOIP scanner resistance setting.
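
Both of these toggles resolve in the same way: the location-level choice applies unless it defers to the global setting, and the location consulted is the one whose node received the call signaling. A conceptual sketch (names are illustrative):

    def effective_setting(location_setting: str, global_setting: bool) -> bool:
        # location_setting is one of "use_global", "yes" or "no"
        # (illustrative names for the three options described above).
        if location_setting == "use_global":
            return global_setting
        return location_setting == "yes"

    # The setting applied to a call is that of the location whose node
    # received the call signaling:
    assert effective_setting("yes", global_setting=False) is True
    assert effective_setting("use_global", global_setting=False) is False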