Deploying Pexip Infinity on Microsoft Azure Virtual Machines
You can use Azure to launch as many or as few virtual servers as you need, and use those virtual servers to host a Pexip Infinity Management Node and as many Conferencing Nodes as required for your Pexip Infinity platform.
Azure enables you to scale up or down to handle changes or spikes in conferencing demand. You can also use the Azure APIs and the Pexip Infinity management API to monitor usage and bring up / tear down Conferencing Nodes as required to meet that demand.
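The scale up / tear down decision described above can be sketched as a simple utilization check. This is a minimal illustration only: the call counts would in practice come from the Pexip Infinity management API, and the resulting start/deallocate actions would be issued through the Azure APIs, both of which are outside the scope of this sketch. The threshold value and function name are assumptions, not part of either API.

```python
def scale_decision(current_calls: int, capacity_per_node: int,
                   active_nodes: int, headroom: float = 0.8) -> str:
    """Return 'scale_up', 'scale_down' or 'hold' based on utilization.

    headroom is the fraction of total capacity above which another
    Conferencing Node should be started (an illustrative threshold).
    """
    if active_nodes == 0:
        return "scale_up"
    utilization = current_calls / (active_nodes * capacity_per_node)
    if utilization > headroom:
        return "scale_up"
    # Only tear a node down if the current load would still fit, with
    # headroom to spare, on one fewer node.
    if active_nodes > 1 and current_calls < (active_nodes - 1) * capacity_per_node * headroom:
        return "scale_down"
    return "hold"
```

A scheduler would run a check like this periodically and act only when the decision is stable across several samples, to avoid flapping.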
Pexip publishes disk images for the Pexip Infinity Management Node and Conferencing Nodes. These images may be used to launch instances of each node type as required.
This flowchart provides an overview of the basic steps involved in deploying the Pexip Infinity platform on Azure:
Azure has two deployment models: Classic and Resource Manager.
Resource Manager is the recommended deployment model for new workloads and is the only model supported by Pexip Infinity.
There are three main deployment options for your Pexip Infinity platform when using the Azure cloud:
- Private cloud: all nodes are deployed within Azure. Private addressing is used for all nodes and connectivity is achieved by configuring a VPN tunnel between the corporate network and Azure. As all nodes are private, this is equivalent to an on-premises deployment which is only available to users internal to the organization.
- Public cloud: all nodes are deployed within Azure. All nodes have a private address but, in addition, public IP addresses are allocated to each node. The node's private addresses are only used for inter-node communications. Each node's public address is then configured on the relevant node as a static NAT address. Access to the nodes is permitted from the public internet, or a restricted subset of networks, as required. Any systems or endpoints that will send signaling and media traffic to those Pexip Infinity nodes must send that traffic to the public address of those nodes. If you have internal systems or endpoints communicating with those nodes, you must ensure that your local network allows such routing.
- Hybrid cloud: the Management Node, and optionally some Conferencing Nodes, are deployed in the corporate network. A VPN tunnel is created between the corporate network and Azure. Additional Conferencing Nodes are deployed in Azure and are managed from the on-premises Management Node. The Azure-hosted Conferencing Nodes can be either internally-facing, privately-addressed (private cloud) nodes; or externally-facing, publicly-addressed (public cloud) nodes; or a combination of private and public nodes (where the private nodes are in a different Pexip Infinity system location to the public nodes).
You may also want to consider dynamic bursting, where the Azure-hosted Conferencing Nodes are only started up and used when you have reached capacity on your on-premises nodes.
All of the Pexip nodes that you deploy in the cloud are completely dedicated to running the Pexip Infinity platform: you maintain full data ownership and control of those nodes.
The following limitations currently apply:
- The OS username is always admin, regardless of any other username configured through the Azure Portal.
- SSH keys are the preferred authentication mechanism for Pexip Infinity instances hosted in the Azure Cloud. Password-based authentication is also supported, using the password provisioned at instance deployment time.
- Pexip Infinity node instances only support a single SSH key pair.
- If you are using a Linux or Mac SSH client to access your instance you must use the chmod command to make sure that your private key file on your local client (SSH private keys are never uploaded) is not publicly viewable. For example, if the name of your private key file is my-key-pair.pem, use the following command: chmod 400 /path/my-key-pair.pem
See https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-linux-ssh-from-windows/ for more information about using SSH on Azure.
- We do not support Azure deployments in China or Germany.
Azure instances come in many different sizes. In general, Pexip Infinity Conferencing Nodes should be considered compute intensive and Management Nodes reflect a more general-purpose workload. Our Server design recommendations also apply to cloud-based deployments.
For deployments of up to 20 Conferencing Nodes, we recommend using:
- Management Node: a Standard A3 instance (or Standard A4 v2 if the Standard A3 is not available).
- Transcoding Conferencing Nodes: either an F8 v2 instance (for smaller deployments) or an F16 v2 instance (for larger deployments).
- Proxying Edge Nodes: an F4 (F-series) instance.
An F8 v2 instance should provide capacity for approximately 15 HD / 37 SD / 230 audio-only calls per Transcoding Conferencing Node, and an F16 v2 instance should provide capacity for approximately 33 HD / 75 SD / 400 audio-only calls per Transcoding Conferencing Node.
If the Fv2 series is not available in your region, then you can use an F8 instance (F-series) for your Transcoding Conferencing Nodes.
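The capacity figures above can be turned into a rough sizing estimate. The sketch below uses only the HD call figures quoted for the F8 v2 and F16 v2 sizes; the dictionary keys and function name are illustrative, and real capacity varies with region, CPU generation, and call mix.

```python
# Approximate per-node call capacities, taken from the guidance above.
CAPACITY = {
    "F8_v2":  {"hd": 15, "sd": 37, "audio": 230},
    "F16_v2": {"hd": 33, "sd": 75, "audio": 400},
}

def nodes_required(instance_size: str, hd_calls: int) -> int:
    """Minimum number of Transcoding Conferencing Nodes of the given
    size needed to host hd_calls simultaneous HD calls."""
    per_node = CAPACITY[instance_size]["hd"]
    return -(-hd_calls // per_node)  # ceiling division
```

For example, hosting 100 simultaneous HD calls on F16 v2 instances would need at least four Transcoding Conferencing Nodes by this estimate.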
By default, Azure Resource Manager virtual machine cores have a regional total limit and a regional per-series limit.
The allocated quota may be increased by opening a support ticket with Microsoft via the Azure Portal.
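Before requesting a quota increase, it can help to total up the vCPUs your planned deployment will consume. The sketch below assumes vCPU counts that follow the Azure size naming convention (for example, F16s v2 has 16 vCPUs); check your actual limits and usage in the Azure Portal or with `az vm list-usage --location <region>`. The size names and quota figure here are illustrative.

```python
# Assumed vCPU counts per instance size (F-series names encode the count).
VCPUS = {
    "Standard_A3": 4,
    "Standard_F4": 4,
    "Standard_F8s_v2": 8,
    "Standard_F16s_v2": 16,
}

def cores_needed(plan: dict) -> int:
    """plan maps an instance size name to the number of instances."""
    return sum(VCPUS[size] * count for size, count in plan.items())

def fits_quota(plan: dict, regional_core_quota: int) -> bool:
    """True if the planned deployment fits within the regional core quota."""
    return cores_needed(plan) <= regional_core_quota
```

A Management Node plus three F16s v2 Transcoding Conferencing Nodes, for instance, would consume 52 cores under these assumptions.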
Within a Virtual Network, an instance's private IP addresses can initially be allocated dynamically (using DHCP) or statically. However, after the private IP address has been assigned to the instance it remains fixed and associated with that instance until the instance is terminated. The allocated IP address is displayed in the Azure portal.
Public IP addresses may be associated with an instance. Public IPs may be dynamic (allocated at launch/start time) or statically configured. Dynamic public IP addresses do not remain associated with an instance when it is stopped; the instance receives a new public IP address when it is next started.
Pexip Infinity nodes must always be configured with the private IP address associated with their instance, as this address is used for all internal communication between nodes. To associate an instance's public IP address with the node, configure that public IP address as the node's Static NAT address.
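The private/public address split described above can be summarized as follows. The field names in this sketch are illustrative only and do not correspond to the Pexip Infinity management API schema.

```python
def node_address_config(private_ip, public_ip=None):
    """Map an Azure instance's addresses onto a node's configuration:
    the private IP is the node's own address, and any public IP is
    entered as the Static NAT address advertised to external endpoints."""
    config = {"address": private_ip}  # used for all inter-node communication
    if public_ip:
        config["static_nat_address"] = public_ip
    return config
```

A privately-addressed (private cloud) node simply omits the Static NAT address.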
The Pexip Infinity deployment instructions assume that within Azure you have already:
- signed up for Azure and created a user account, administrator groups etc.
- decided in which Azure region/location to deploy your Pexip Infinity platform (one Management Node and one or more associated Conferencing Nodes)
- created a Resource Group, Virtual Network, and Storage Account in the chosen Azure region (see Preparing your Azure environment)
- (if necessary) configured a VPN tunnel from the corporate/management network to the Azure Virtual Network
- set your subscription's Microsoft partner ID to Pexip (see Configuring your Azure subscription)
- created a Network Security Group (see Configuring Azure Network Security Groups for port requirements)
For more information on setting up your Azure Virtual Machine environment, see https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-linux-about/.
Pexip Infinity node instances that are hosted on Azure may be deployed across multiple Azure Virtual Networks (VNets), where each Azure VNet (and the Conferencing Nodes within it) maps onto a Pexip Infinity system location.
You must use a VNet-to-VNet VPN gateway connection; do not use VNet peering.
See https://azure.microsoft.com/en-gb/documentation/articles/vpn-gateway-howto-vnet-vnet-resource-manager-portal/ for information about how to create a connection between Azure VNets.
To deploy and manage your Pexip Infinity platform nodes see:
- Configuring your Azure subscription
- Preparing your Azure environment
- Configuring Azure Network Security Groups
- Obtaining and preparing disk images for Azure deployments
- Creating VM instances in Azure for your Pexip nodes
- Deploying a Management Node in Azure
- Deploying a Conferencing Node in Azure
- Managing Azure instances