Deploying Pexip Infinity on Microsoft Azure Virtual Machines

The Microsoft Azure Virtual Machines (VMs) service provides scalable computing capacity in the Microsoft Azure cloud. Using Azure eliminates the need to invest in hardware up front, so you can deploy Pexip Infinity even faster.

You can use Azure to launch as many or as few virtual servers as you need, and use those virtual servers to host a Pexip Infinity Management Node and as many Conferencing Nodes as required for your Pexip Infinity platform.

Azure enables you to scale up or down to handle planned changes or unexpected spikes in conferencing demand. You can also use the Azure APIs and the Pexip Infinity management API to monitor usage and bring up or tear down Conferencing Nodes as required, or allow Pexip Infinity to handle this automatically for you via its dynamic bursting capabilities.
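
For example, an overflow Conferencing Node can be started and stopped on demand using the Azure CLI. This is a minimal sketch, with placeholder resource group and VM names (pexip-rg, conf-node-1):

    # Start an overflow Conferencing Node when extra capacity is needed
    az vm start --resource-group pexip-rg --name conf-node-1

    # Deallocate it when demand subsides; unlike "az vm stop", deallocation
    # releases the compute resources so the node does not incur VM charges
    az vm deallocate --resource-group pexip-rg --name conf-node-1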

Pexip publishes disk images for the Pexip Infinity Management Node and Conferencing Nodes. These images may be used to launch instances of each node type as required.
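
As a rough sketch of how such an image might be registered in your own subscription, assuming the published VHD has already been copied to a blob in your storage account (all names and the URL below are placeholders):

    az image create \
        --resource-group pexip-rg \
        --name pexip-node-image \
        --os-type Linux \
        --source https://<storage-account>.blob.core.windows.net/vhds/pexip-node.vhd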

The sections below outline the deployment models and options, limitations, capacity guidelines, and prerequisites for deploying the Pexip Infinity platform on Azure.

Deployment models

Azure has two deployment models: Classic and Resource Manager.

Resource Manager is the recommended deployment model for new workloads and is the only model supported by Pexip Infinity.

Deployment options

There are three main deployment options for your Pexip Infinity platform when using the Azure cloud:

  • Private cloud: all nodes are deployed within Azure. Private addressing is used for all nodes and connectivity is achieved by configuring a VPN tunnel between the corporate network and Azure. As all nodes are private, this is equivalent to an on-premises deployment which is only available to users internal to the organization.
  • Public cloud: all nodes are deployed within Azure. All nodes have a private address but, in addition, a public IP address is allocated to each node. Each node's private address is used only for inter-node communications, and its public address is configured on the node as a static NAT address. Access to the nodes is permitted from the public internet, or from a restricted subset of networks, as required. Any systems or endpoints that will send signaling and media traffic to those Pexip Infinity nodes must send that traffic to the public address of those nodes. If you have internal systems or endpoints communicating with those nodes, you must ensure that your local network allows such routing.
  • Hybrid cloud: the Management Node, and optionally some Conferencing Nodes, are deployed in the corporate network. A VPN tunnel is created between the corporate network and Azure. Additional Conferencing Nodes are deployed in Azure and are managed from the on-premises Management Node. The Azure-hosted Conferencing Nodes can be either internally-facing, privately-addressed (private cloud) nodes; or externally-facing, publicly-addressed (public cloud) nodes; or a combination of private and public nodes (where the private nodes are in a different Pexip Infinity system location to the public nodes). You may also want to consider dynamic bursting, where the Azure-hosted Conferencing Nodes are only started up and used when you have reached capacity on your on-premises nodes.

All of the Pexip nodes that you deploy in the cloud are completely dedicated to running the Pexip Infinity platform: you maintain full data ownership and control of those nodes.

Limitations

The following limitations currently apply:

  • The OS username is always admin, regardless of any other username configured through the Azure Portal.

  • SSH keys are the preferred authentication mechanism for Pexip Infinity instances hosted in the Azure cloud. However, password-based authentication is also supported, and uses the password provisioned when the instance is deployed.

    Note that:

    • When the Management Node has been deployed, you can assign and use your own SSH keys for the Management Node and any Conferencing Nodes — see Configuring SSH authorized keys for details.
    • If you are using a Linux or Mac SSH client to access your instance, you must use the chmod command to ensure that your private key file on your local client is not publicly viewable (SSH private keys are never uploaded). For example, if the name of your private key file is my-key-pair.pem, use the command chmod 400 /path/my-key-pair.pem. A worked connection example is shown after this list.

    See https://docs.microsoft.com/en-us/azure/virtual-machines/linux/ssh-from-windows for more information about using SSH on Azure.

  • We do not support Azure deployments in China.
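
As a minimal illustration of connecting from a Linux or Mac SSH client, assuming a private key file named my-key-pair.pem and a placeholder node IP address (remember that the OS username is always admin):

    # Restrict permissions on the private key, then connect as admin
    chmod 400 /path/my-key-pair.pem
    ssh -i /path/my-key-pair.pem admin@<node-ip-address>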

Recommended instance types and call capacity guidelines

Azure instances come in many different sizes. In general, Pexip Infinity Conferencing Nodes should be considered compute intensive, while the Management Node runs a more general-purpose workload. Our Server design recommendations also apply to cloud-based deployments.

For deployments of up to 20 Conferencing Nodes, we recommend using:

  • Management Node: an F4s v2 instance.
  • Transcoding Conferencing Nodes: either an F8s v2 instance for smaller deployments, or an F16s v2 or F32s v2 instance for larger deployments.
  • Proxying Edge Nodes: an F4s v2 instance.

Each Transcoding Conferencing Node should provide capacity for approximately:

  • F8s v2: 15 HD / 37 SD / 270 audio-only calls.
  • F16s v2: 30 HD / 70 SD / 450 audio-only calls.
  • F32s v2: 56 HD / 112 SD / 880 audio-only calls.
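
As a sketch of launching a Transcoding Conferencing Node at one of the recommended sizes, assuming an image already registered in your subscription (resource names are placeholders, and flag names can vary between Azure CLI versions):

    az vm create \
        --resource-group pexip-rg \
        --name transcoding-node-1 \
        --image pexip-node-image \
        --size Standard_F8s_v2 \
        --admin-username pexip \
        --ssh-key-values ~/.ssh/id_rsa.pub

As described under Limitations above, the OS username remains admin regardless of the --admin-username value supplied here.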

If the Fsv2 series is not available in your region, you can use F-series instances for your nodes instead.

Note that the actual VM/processor deployed by Azure for your requested instance type can vary, which can affect the capacity of your Conferencing Node. If different Conferencing Nodes report significantly different transcoding capacities, review each node's status to see which CPU model has been allocated to its VM.

Capacity planning

By default, Azure Resource Manager virtual machine cores have a regional total limit and a regional per-series (F, Fsv2, etc.) limit, both of which are enforced per subscription. Typically, for each subscription, the default quota allows up to 10-20 CPU cores per region and 10-20 cores per series. An F8s v2 instance uses 8 CPU cores, so with the default limits in place only two F8s v2 instances may be deployed (only 4 CPU cores would remain in either quota pool, which is insufficient for another F8s v2 instance).
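
You can check your current usage against these quotas with the Azure CLI, for example (the region is a placeholder):

    # Lists each VM family quota, and current usage, for the region
    az vm list-usage --location westeurope --output table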

The allocated quota may be increased by opening a support ticket with Microsoft via the Azure Portal. Ensure that you request a sufficient number of CPU cores. For example, if 10 Transcoding Conferencing Nodes are required, then the quota must be increased to 8 cores x 10 F8s v2 instances = 80 CPU cores of type F8s v2. It may take a number of days for the quota increase request to be processed. For more information see this article.

IP addressing

Within a Virtual Network, an instance's private IP address can initially be allocated dynamically (using DHCP) or statically. However, after the private IP address has been assigned, it remains fixed and associated with that instance until the instance is terminated. The allocated IP address is displayed in the Azure portal.

Public IP addresses may also be associated with an instance. Public IPs may be dynamic (allocated at launch/start time) or statically configured. A dynamic public IP address does not remain associated with an instance when it is stopped, and thus the instance will receive a new public IP address when it is next started.
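
If a node needs a public address that survives stop/start cycles, you can allocate a static public IP and associate it with the instance's network interface. A minimal sketch with placeholder resource names:

    # Allocate a static public IP address
    az network public-ip create \
        --resource-group pexip-rg \
        --name conf-node-1-ip \
        --allocation-method Static

    # Associate it with the instance's NIC (NIC and ipconfig names are
    # placeholders; "ipconfig1" is the usual default)
    az network nic ip-config update \
        --resource-group pexip-rg \
        --nic-name conf-node-1-nic \
        --name ipconfig1 \
        --public-ip-address conf-node-1-ip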

Each Pexip Infinity node must always be configured with the private IP address associated with its instance, as that address is used for all internal communication between nodes. To associate an instance's public IP address with the node, configure that public IP address as the node's Static NAT address (via Platform > Conferencing Nodes).

Assumptions and prerequisites

The Pexip Infinity deployment instructions assume that within Azure you have already:

  • signed up for Azure and created a user account, administrator groups, and so on
  • decided in which Azure region/location to deploy your Pexip Infinity platform (one Management Node and one or more associated Conferencing Nodes)
  • created a Resource Group, Virtual Network, and Storage Account in the chosen Azure region (see Preparing your Azure environment; a CLI sketch is shown below)
  • (if necessary) configured a VPN tunnel from the corporate/management network to the Azure Virtual Network
  • set your subscription's Microsoft partner ID to Pexip (see Configuring your Azure subscription)
  • created a Network Security Group (see Configuring Azure Network Security Groups for port requirements)

See this article for more information on setting up your Azure Virtual Machine environment.
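
A minimal sketch of creating these prerequisites with the Azure CLI, using placeholder names and a placeholder region (flag names can vary between CLI versions):

    # Resource group to hold all Pexip Infinity resources
    az group create --name pexip-rg --location westeurope

    # Virtual Network with a subnet for the Pexip nodes
    az network vnet create \
        --resource-group pexip-rg \
        --name pexip-vnet \
        --address-prefixes 10.0.0.0/16 \
        --subnet-name pexip-subnet \
        --subnet-prefixes 10.0.1.0/24

    # Storage account for the uploaded disk images
    az storage account create \
        --resource-group pexip-rg \
        --name pexipstorage01 \
        --location westeurope \
        --sku Standard_LRS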

Multiple Virtual Networks

Pexip Infinity node instances that are hosted on Azure may be deployed across multiple Azure Virtual Networks (VNets), where each Azure VNet (and the Conferencing Nodes within it) maps onto a Pexip Infinity system location. See Configuring Azure Network Security Groups for port requirements when using multiple VNets.

You can deploy your Conferencing Nodes in Azure across peered regions (Global VNet Peering); see this article for more information about virtual network peering. Alternatively, you can use a VNet-to-VNet VPN gateway connection; see this article for information about how to create a connection between Azure VNets.
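
A minimal sketch of peering two VNets with the Azure CLI, using placeholder names (peering must be configured in both directions, so repeat the command with the VNet roles reversed):

    az network vnet peering create \
        --resource-group pexip-rg \
        --name pexip-vnet-1-to-2 \
        --vnet-name pexip-vnet-1 \
        --remote-vnet pexip-vnet-2 \
        --allow-vnet-access

    # For VNets in another resource group or subscription, pass the full
    # resource ID to --remote-vnet instead of the name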