Deploying Pexip Infinity on Google Cloud Platform (GCP) Virtual Machines
You can use GCP to launch as many or as few virtual servers as you need, and use them to host a Pexip Infinity Management Node and as many Conferencing Nodes as required for your Pexip Infinity platform.
GCP enables you to scale capacity up or down to handle changing requirements or spikes in conferencing demand. You can also use the GCP APIs and the Pexip Infinity management API to monitor usage and bring up or tear down Conferencing Nodes as required to meet demand, or allow Pexip Infinity to handle this automatically for you via its dynamic bursting capabilities.
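As a rough illustration of combining the two APIs, the sketch below polls platform load and starts a stopped GCP-hosted overflow node when load is high. The management API endpoint path, the `media_load` field, the credentials, and the instance and zone names are all assumptions for illustration only; check the Pexip Infinity management API reference for the exact status resources. (Pexip's built-in dynamic bursting does this for you automatically.)

```shell
#!/bin/sh
# Hypothetical overflow trigger: start a GCP-hosted Conferencing Node when
# any transcoding node reports more than 80% media load.
# MGR address, credentials, the status endpoint, the "media_load" field,
# and the instance/zone names are placeholders -- verify against your own
# deployment and the Pexip management API documentation.
MGR="manager.example.com"
LOAD=$(curl -s -u admin:PASSWORD \
  "https://$MGR/api/admin/status/v1/worker_vm/" |
  python3 -c 'import json,sys; vms=json.load(sys.stdin)["objects"]; print(max(v["media_load"] for v in vms))')

if [ "$LOAD" -gt 80 ]; then
  gcloud compute instances start pexip-burst-1 --zone=europe-west1-b
fi
```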
Pexip publishes disk images for the Pexip Infinity Management Node and Conferencing Nodes. These images may be used to launch instances of each node type as required.
This flowchart provides an overview of the basic steps involved in deploying the Pexip Infinity platform on GCP:
There are three main deployment options for your Pexip Infinity platform when using the Google Cloud Platform:
- Private cloud: all nodes are deployed within Google Cloud Platform. Private addressing is used for all nodes and connectivity is achieved by configuring a VPN tunnel between the corporate network and GCP. As all nodes are private, this is equivalent to an on-premises deployment which is only available to users internal to the organization.
- Public cloud: all nodes are deployed within GCP. All nodes have a private address but, in addition, public IP addresses are allocated to each node. The node's private addresses are only used for inter-node communications. Each node's public address is then configured on the relevant node as a static NAT address. Access to the nodes is permitted from the public internet, or a restricted subset of networks, as required. Any systems or endpoints that will send signaling and media traffic to those Pexip Infinity nodes must send that traffic to the public address of those nodes. If you have internal systems or endpoints communicating with those nodes, you must ensure that your local network allows such routing.
- Hybrid cloud: the Management Node, and optionally some Conferencing Nodes, are deployed in the corporate network. A VPN tunnel is created between the corporate network and GCP. Additional Conferencing Nodes are deployed in GCP and are managed from the on-premises Management Node. The GCP-hosted Conferencing Nodes can be either internally-facing, privately-addressed (private cloud) nodes; or externally-facing, publicly-addressed (public cloud) nodes; or a combination of private and public nodes (where the private nodes are in a different Pexip Infinity system location to the public nodes). You may also want to consider dynamic bursting, where the GCP-hosted Conferencing Nodes are only started up and used when you have reached capacity on your on-premises nodes.
All of the Pexip nodes that you deploy in the cloud are completely dedicated to running the Pexip Infinity platform; you maintain full data ownership and control of those nodes.
GCP instances come in many different sizes. In general, Pexip Infinity Conferencing Nodes should be considered compute-intensive, while the Management Node runs a more general-purpose workload. Our server design recommendations also apply to cloud-based deployments.
For deployments of up to 20 Conferencing Nodes, we recommend using:
- Management Node: a machine type with 4 vCPUs (n1-standard-4) or larger.
- Transcoding Conferencing Nodes: a machine type with 8 vCPUs and 7.2 GB memory (n1-highcpu-8) or larger.
- Proxying Edge Nodes: a machine type with 4 vCPUs (n1-highcpu-4).
This should provide capacity for approximately 11 HD / 27 SD / 140 audio-only calls per Transcoding Conferencing Node.
For all available machine types see: https://cloud.google.com/compute/pricing#predefined_machine_types.
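The sizing recommendations above can be applied when launching an instance with the gcloud CLI. In this sketch, the instance name, zone, network, and image name are placeholders; the Pexip disk image must first be imported into your project (see "Obtaining and preparing disk images for GCE Virtual Machines" below):

```shell
# Sketch: launch a Transcoding Conferencing Node on the recommended
# n1-highcpu-8 machine type. Name, zone, network, and image are
# placeholders for your own values.
gcloud compute instances create pexip-transcode-1 \
  --zone=europe-west1-b \
  --machine-type=n1-highcpu-8 \
  --image=pexip-infinity-conf-node \
  --network=pexip-vpc
```

A Proxying Edge Node would be launched the same way with `--machine-type=n1-highcpu-4`, and the Management Node with `--machine-type=n1-standard-4`.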
An SSH key must be applied to the VM instance that will host the Management Node (in order to complete the installation), and we also recommend applying SSH keys to the VM instances that will host your Conferencing Nodes. Keys can be applied project-wide or to a particular VM instance. If a key is applied after the VM instance has been created, the instance must be rebooted for the key to take effect.
The username element of the SSH key must be "admin" or "admin@<domain>" i.e. the key takes the format:
ssh-rsa [KEY_VALUE] admin or
ssh-rsa [KEY_VALUE] admin@example.org for example.
You can create key pairs with third-party tools such as PuTTYgen, or you can use an existing SSH key pair but you will need to format the public key to work in Compute Engine metadata (and ensure the username is modified to "admin"). For more information about using and formatting SSH keys for GCP, see https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys.
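For example, an existing public key can be reformatted for Compute Engine metadata like this. The key value shown is a placeholder, not a real key, and the instance name is an assumption:

```shell
# Reformat an existing OpenSSH public key so the username element is
# "admin", in the "user:key user" form that GCE instance metadata expects.
# The key value below is a placeholder, not a real key.
PUBKEY="ssh-rsa AAAAB3NzaC1yc2EexampleKey someone@laptop"
GCE_KEY=$(echo "$PUBKEY" | awk '{print "admin:" $1 " " $2 " admin"}')
echo "$GCE_KEY"
# -> admin:ssh-rsa AAAAB3NzaC1yc2EexampleKey admin

# To apply it to a single instance (reboot required if the VM already
# exists; instance name is a placeholder):
#   gcloud compute instances add-metadata pexip-mgr-1 \
#     --metadata "ssh-keys=$GCE_KEY"
```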
For a private or hybrid cloud deployment, you must configure the Google Cloud virtual private network (VPN) to connect your on-premises network to the Google Cloud Platform.
For full information about how to configure the Google Cloud VPN, see https://cloud.google.com/compute/docs/vpn/overview.
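As a hedged sketch of what a Classic (policy/route-based) VPN setup looks like with gcloud, the commands below create a gateway, the required IKE/IPsec forwarding rules, a tunnel, and a route for the corporate subnet. All names, regions, address ranges, the peer address, and the shared secret are placeholders; follow Google's Cloud VPN documentation for current options, including HA VPN and dynamic (BGP) routing:

```shell
# Gateway and its public IP (all values are placeholders).
gcloud compute target-vpn-gateways create pexip-vpn-gw \
  --network=pexip-vpc --region=europe-west1
gcloud compute addresses create pexip-vpn-ip --region=europe-west1
GW_IP=$(gcloud compute addresses describe pexip-vpn-ip \
  --region=europe-west1 --format='value(address)')

# Forwarding rules so ESP, IKE (UDP 500) and NAT-T (UDP 4500) traffic
# reaches the gateway.
gcloud compute forwarding-rules create pexip-vpn-esp \
  --region=europe-west1 --address="$GW_IP" \
  --ip-protocol=ESP --target-vpn-gateway=pexip-vpn-gw
gcloud compute forwarding-rules create pexip-vpn-udp500 \
  --region=europe-west1 --address="$GW_IP" \
  --ip-protocol=UDP --ports=500 --target-vpn-gateway=pexip-vpn-gw
gcloud compute forwarding-rules create pexip-vpn-udp4500 \
  --region=europe-west1 --address="$GW_IP" \
  --ip-protocol=UDP --ports=4500 --target-vpn-gateway=pexip-vpn-gw

# Tunnel to the on-premises peer, and a route for the corporate subnet.
gcloud compute vpn-tunnels create pexip-tunnel-1 \
  --region=europe-west1 --target-vpn-gateway=pexip-vpn-gw \
  --peer-address=203.0.113.10 --shared-secret=REPLACE_ME \
  --local-traffic-selector=0.0.0.0/0 --remote-traffic-selector=0.0.0.0/0
gcloud compute routes create pexip-route-corp \
  --network=pexip-vpc --destination-range=10.0.0.0/8 \
  --next-hop-vpn-tunnel=pexip-tunnel-1 \
  --next-hop-vpn-tunnel-region=europe-west1
```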
All GCE VM instances are allocated a Primary internal IP (i.e. private) address. You can optionally also assign a static External IP (i.e. public) address to a GCE VM instance. You should assign a public address to all nodes in a public cloud deployment, and to any externally-facing nodes in a hybrid deployment, that you want to be accessible to conference participants located in the public internet.
Pexip Infinity nodes must always be configured with the private IP address associated with their instance, as it is used for all internal communication between nodes. To associate an instance's public IP address with the node, configure that public IP address as the node's Static NAT address.
The private IP address should be used as the Conferencing Node address by users and systems connecting to conferences from the corporate network (via the Google Cloud VPN) in a private or hybrid cloud deployment. When an instance has been assigned an external IP address and that address is configured on the Conferencing Node as its Static NAT address, all conference participants must use that external address to access conferencing services on that node.
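A static external address can be reserved and attached with gcloud as sketched below; the reserved address is then what you configure as the node's Static NAT address. The instance name, region, and zone are placeholders:

```shell
# Reserve a static external IP (names/region/zone are placeholders).
gcloud compute addresses create pexip-node1-ext-ip --region=europe-west1
EXT_IP=$(gcloud compute addresses describe pexip-node1-ext-ip \
  --region=europe-west1 --format='value(address)')

# Replace the instance's ephemeral external address with the reserved one.
gcloud compute instances delete-access-config pexip-node1 \
  --zone=europe-west1-b --access-config-name="external-nat"
gcloud compute instances add-access-config pexip-node1 \
  --zone=europe-west1-b --access-config-name="external-nat" \
  --address="$EXT_IP"
```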
The deployment instructions assume that within GCP you have already:
- signed up to the Google Cloud Platform
- configured a Google Cloud VPN (for a private or hybrid cloud deployment)
For more information on setting up your Google Cloud Platform Virtual Machines, see https://cloud.google.com/compute/docs/instances/.
To deploy and manage your Pexip Infinity platform nodes see:
- Configuring your Google VPC network
- Obtaining and preparing disk images for GCE Virtual Machines
- Deploying a Management Node in Google Cloud Platform
- Initial platform configuration — GCP
- Deploying a Conferencing Node in Google Cloud Platform
- Configuring dynamic bursting to the Google Cloud Platform (GCP)
- Managing Google Compute Engine VM instances