Planning, prerequisites and firewall ports for your Microsoft Teams integration

This topic provides an overview of the Pexip Teams Connector architecture, your deployment environment options, and all certificate, network and firewall considerations and requirements.

You can then install your Teams Connector as described in Installing and configuring the Teams Connector in Azure.

Architecture overview

The Pexip Teams Connector is a Pexip application that is deployed in Microsoft Azure and is used to enable Cloud Video Interoperability (CVI) with Microsoft Teams. It handles all Teams communications and meeting requests from the Pexip Infinity platform and passes them on to the Microsoft Teams environment. The dedicated application ensures control and ownership for organizations with stringent regulatory compliance requirements.

The diagram below shows the Teams Connector components that are deployed in Azure, and how they interact with the Pexip Infinity platform and Microsoft Teams. Note that:

  • The Azure Virtual Machine scale set (VMSS) allows the Pexip application to run across a group of identical, load balanced VMs.
  • The Azure Standard Load Balancer enables the use of Azure Availability Zones, which are used by default if they are available in your selected region. It is represented twice in the diagram (performing load balancing towards Pexip Infinity, and NAT towards Teams), but it is the same single Azure resource.

You do not have to set up these Azure components individually — they are all created as part of the Teams Connector deployment process.

Teams Connector components

Pexip Infinity platform

While the Teams Connector must be deployed in Microsoft Azure, the Pexip Infinity platform can be installed in any supported environment such as on-premises or in a public or hybrid cloud (which would typically be Microsoft Azure when integrating with Microsoft Teams).

On-premises deployment

The Pexip Infinity platform can be deployed on-premises with public-facing Conferencing Nodes used to connect to the Pexip Teams Connector in Azure.

Teams Connector deployed in Azure and Infinity platform deployed on-premises

In this example deployment, external endpoints and federated systems, as well as on-premises devices, can all connect to Teams conferences via the Pexip DMZ nodes.

Cloud-hosted deployment

The Pexip Infinity platform can be deployed in a dedicated public or hybrid cloud within your own cloud subscription, providing full control over your environment.

Teams Connector and Infinity platform deployed in Azure

Here, external endpoints, federated systems and on-premises devices can all connect to Teams conferences via the cloud-hosted Pexip Infinity nodes. You could use any supported cloud service but you would typically deploy your Conferencing Nodes in Microsoft Azure alongside your Pexip Teams Connector.

Including third-party call control

The Pexip Teams Connector and the Pexip Infinity platform can both be deployed in Azure with an on-premises, third-party call control system.

Teams Connector and Infinity platform deployed in Azure with third-party call control

If you have a third-party call control system that you want to retain, it can be configured to connect your on-premises systems to the cloud-hosted Pexip Infinity platform.

Pexip Infinity integrates closely with Microsoft Teams, using Teams APIs and Microsoft SDKs to provide Infinity's interoperability features. Although Pexip strives to maintain backwards compatibility between older versions of Pexip Infinity and the latest release of Microsoft Teams, we recommend keeping your Pexip Infinity deployment up-to-date with the latest Pexip Infinity software release to ensure compatibility with the latest Teams updates. If, for example, you have a large Pexip deployment for non-Teams services, and stringent upgrade procedures mean that you do not always run the latest Infinity software, you may want to consider deploying a second instance of the Pexip Infinity platform that is dedicated to your Teams interoperability requirements, and which can be managed separately and upgraded more frequently.

See Pexip Infinity installation guidelines for complete information about all of the platforms into which you can deploy the Pexip Infinity platform, and Configuring Pexip Infinity as a Microsoft Teams gateway for specific instructions about how to integrate Pexip Infinity with the Teams Connector.

Deployment and upgrade strategy

The Teams Connector deployment/upgrade process supports a blue-green deployment strategy. This approach allows you to create separate environments where one environment (blue) is running the current application version and another environment (green) is running the new application version. This means that you can test both of these deployments separately, and switch between them.

This is different from the regular deployment and upgrade strategy that involves deploying a single Teams Connector environment that is destructively replaced during the upgrade process.

We recommend using a blue-green deployment strategy, where you have two Teams Connector environments deployed in parallel, as it:

  • Provides a non-destructive upgrade path.
  • Increases application availability during the upgrade process.
  • Enables upgrade activities to be done in business hours, when access to required service administrators/owners is more readily available, before a planned "switch over" window.
  • Reduces time-pressure and risk if there are delays due to Azure resources not being available.
  • Reduces deployment risk by simplifying the rollback process if a deployment fails.

See Installing the Teams Connector using a blue-green deployment/upgrade strategy for full details, and see this article for more background information on blue-green deployments.

Preparing your Azure environment, regions, instance type, and capacity planning

This section lists the various preparation steps you must perform before starting your Teams Connector installation into Azure.

Obtain an Azure subscription and an Azure tenant ID

Ensure that you have an Azure subscription and an Azure tenant ID for your Teams Connector deployment.

Note that some of the installation steps must be performed by somebody with Owner permissions for the Azure subscription (see Azure permissions requirements for more information).

Decide Azure deployment region(s) and instance type

Decide in which Azure region and with which instance type you want to deploy the Teams Connector. Large enterprises may want to install a Teams Connector in multiple regions.

You must use one of the following instance types:

  • Standard_D4s_v5 (Intel)
  • Standard_D4as_v5 (AMD)
  • Standard_F4s (previous default)

The type is specified via $PxAzureVmSize in the variables script and is set to Standard_D4s_v5 by default.
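As an illustrative sketch (the variable name is as documented above; the surrounding layout of the variables script is an assumption), the relevant fragment of the variables initialization script would look like this:

```powershell
# Instance type for the Teams Connector VMSS instances.
# Default is Standard_D4s_v5; alternatives are Standard_D4as_v5 and Standard_F4s.
$PxAzureVmSize = "Standard_D4s_v5"
```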

You should consider your VM quota, instance type availability in your chosen region, and pricing. Each instance type has the same call capacity.

The Azure region must support Automation and your nominated instance type. (See the Microsoft articles Azure Automation and Azure product availability by region for more information.)

You can use the following PowerShell script to list the current set of Azure regions that support Automation and your preferred instance type. The script currently checks where Standard_D4s_v5 is supported, but you can change the instance type name in the script to Standard_D4as_v5 or Standard_F4s as required.

$(
  $automationLocations = ((Get-AzResourceProvider -ProviderNamespace Microsoft.Automation).ResourceTypes | Where-Object ResourceTypeName -eq 'automationAccounts').Locations;
  $vmLocations = ((Get-AzResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes | Where-Object ResourceTypeName -eq 'locations/virtualMachines').Locations;
  $vmInstanceLocations = $vmLocations | ?{ 'Standard_D4s_v5' -in @(Get-AzVMSize -Location $_ | %{ $_.Name }) };
  Get-AzLocation | ?{ ($_.DisplayName -in $automationLocations) -and ($_.DisplayName -in $vmInstanceLocations) }
) | Select-Object -Property DisplayName,Location

Note that this script does not support the Department of Defense (DoD) / Azure Government regions.

Ensure that you have sufficient resource quota and capacity for your region and instance types

By default, Azure Resource Manager virtual machine cores have a regional total limit and a regional per-series limit, both enforced per subscription. Typically, for each subscription, the default quota allows 10–20 CPU cores per region and 10–20 cores per series.

You can request an increase in the allocated quota by opening a support ticket with Microsoft via the Azure Portal. Based on your capacity requirements, request a quota increase for your subscription, and ensure that you request a sufficient number of CPU cores. Each Teams Connector instance uses 4 vCPUs of type Dsv5-series (or Dasv5-series or Fs-series). For example, if 6 Teams Connector instances are required, the quota must be increased to 4 cores x 6 instances = 24 CPU cores of the relevant series. However, we strongly recommend that you request a quota covering more than the minimum, such as 40 cores, to allow for an increase in the future. It may take a number of days for the quota increase request to be processed. For more information see this article.
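The quota arithmetic above can be sketched as a quick calculation (the figures are the example values from this section):

```powershell
# Cores required = instances x 4 vCPUs per instance (Dsv5/Dasv5/Fs-series)
$instances        = 6
$coresPerInstance = 4
$minimumQuota     = $instances * $coresPerInstance   # 24 cores
$recommendedQuota = 40                               # headroom for future growth
```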

GCC High / Azure US Government Cloud deployments

You can deploy Teams Connector in the standard Azure commercial cloud, or where necessary for specific compliance requirements, in the Azure US Government environment.

Note that you can deploy Teams Connector in Azure US Government in combination with a commercial Teams environment or a Teams GCC environment. But, for Teams GCC High environments, Teams Connector must be deployed in Azure US Government.

The Teams Connector must be hosted in the Arizona or Texas Azure regions for GCC High. The Virginia Azure region does not support the hosting of the Teams Connector for GCC High.

Most of the deployment processes are the same for either Azure environment, but there are some extra steps and minor differences when deploying in a US Government environment. Where variances are indicated within this documentation, follow either the standard deployment instructions (for commercial Azure) or the GCC High / Azure US Government Cloud deployment instructions, as appropriate.

Capacity planning

Each Teams Connector instance can handle a maximum of 16 calls, although the capacity of each instance can vary depending on various factors such as call resolution, the number of presentation streams, and the number of participants in the Teams meetings. For capacity planning purposes we recommend that you assume 15 calls per instance.
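As a planning sketch, assuming the recommended 15 calls per instance, you can derive the number of instances needed for a target call load (the 90-call figure below is purely illustrative):

```powershell
# Instances needed = ceiling(concurrent calls / calls per instance)
$concurrentCalls  = 90
$callsPerInstance = 15
$instancesNeeded  = [math]::Ceiling($concurrentCalls / $callsPerInstance)   # 6 instances
```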

Network and certificate requirements

This diagram shows how the main elements in a standard Microsoft Teams integration communicate with each other and how the connection between each element is validated/authenticated.

Teams communications and network flow

  • You must have one or more publicly-reachable Conferencing Nodes. Those nodes:

    • can be Transcoding Conferencing Nodes or Proxying Edge Nodes
    • can have static NAT and/or dual network interfaces, as the Teams Connector is treated as a lineside connection.
  • The public-facing Conferencing Nodes always communicate with the Teams Connector via public IP, even if they are within the same Azure tenant.
  • The Teams Connector communicates with the Teams (Microsoft 365) backend via public IP; all traffic stays within the Microsoft network.
  • The Teams Connector supports connections over TLSv1.2 only, and does not support RC2, RC4, DES and 3DES ciphers.

As an alternative network configuration you can consider using private routing between the Teams Connector and Pexip Infinity — see Using private routing with the Teams Connector for more information. If you don't want to use private routing but still need more control over the Teams Connector VNET, you can deploy and use your own customized VNET — see Using a custom VNET with the Teams Connector.

In summary, the certificate usage principles are:

  • The Teams Connector and Pexip Infinity validate the connection in both directions by TLS client certificate validation. This means that every certificate's Enhanced Key Usage properties must be set for both server and client authentication.

  • Public-facing Conferencing Nodes must have a valid publicly-signed PEM-formatted certificate (typically with a .CRT or .PEM extension).
  • The Teams Connector must have a publicly-signed PFX-formatted certificate. If you deploy Teams Connectors in several regions, each region needs its own name, covered either by separate certificates or by Subject Alternative Name entries on a single certificate.

Obtaining and preparing the TLS certificate for the Teams Connector

You must install on the Teams Connector a TLS certificate that has been signed by an external trusted CA (certificate authority).

You need to have this certificate available before you install the Teams Connector.

The certificate must be in Personal Information Exchange Format (PFX), also known as PKCS #12, which enables the transfer of certificates and their private keys from one system to another. It must use RSA keys.

  1. Decide on the FQDN (DNS name) you will use for the Teams Connector load balancer in Azure that will front the Teams Connector deployment e.g. pexip-teamsconn-eu.teams.example.com.

    • This FQDN is what you will use as:

      • the value of $PxTeamsConnFqdn in the variables initialization script
      • the certificate's subject name
      • the DNS name you will configure in Pexip Infinity (Call control > Microsoft Teams Connectors > Address of Teams Connector) later in the process.
    • It can use the same domain space as your Pexip Infinity deployment or your Teams deployment, or it can use an altogether different domain. In all cases you need to create the necessary DNS CNAME record(s) and public certificates for the chosen domain.
    • If you intend to deploy other Teams Connectors in other Azure regions, you will need a different DNS name for each Teams Connector and a certificate that matches that identity. You can use a single certificate for this, containing Subject Alternative Name (altNames attribute) entries for all of the regional Teams Connectors.
    • It can be a wildcard certificate, where the wildcard character ('*') is the only character of the left-most label of a DNS domain name. Note that Pexip supports RFC 6125 — this means that if you are using subdomains then, for example, a wildcard certificate of *.example.com would match foo.example.com but not bar.foo.example.com or example.com.
  2. Request a certificate for that name and generate the certificate in PFX format. Any intermediate certificates must also be in the PFX file.

You can use the Pexip Infinity Management Node to generate a certificate signing request (CSR).

You can use the Pexip Infinity Management Node to convert PEM certificates to PFX format (or vice versa), by uploading a PEM-formatted certificate and then downloading it again in PFX format. When downloading you can also include the private key and all necessary intermediate certificates in the PFX bundle.
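Tying these naming pieces together, the FQDN chosen in step 1 is set in the variables initialization script and must match the certificate's subject name and your DNS CNAME record. A minimal sketch, reusing the illustrative FQDN from step 1:

```powershell
# FQDN of the Teams Connector load balancer; must match the certificate's
# subject name and the DNS CNAME record you create for it.
$PxTeamsConnFqdn = "pexip-teamsconn-eu.teams.example.com"
```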

Ensuring Conferencing Nodes have suitable certificates

The Conferencing Nodes (typically Proxying Edge Nodes) that will communicate with the Teams Connector must have TLS certificates installed that have been signed by an external trusted CA (certificate authority). If a chain of intermediate CA certificates is installed on the Management Node (to provide the chain of trust for the Conferencing Node's certificate) those intermediate certificates must not include any HTTP-to-HTTPS redirects in their AIA (Authority Information Access) section.

We recommend that you assign a "pool name" to all of the Conferencing Nodes that will communicate with the Teams Connector. The pool name should be used as a common Subject name on the certificate that is uploaded to each of those Conferencing Nodes. The certificate should also contain the individual FQDNs of each of the nodes in the pool as a Subject Alternative Name on the certificate. This pool name can then be specified on the Teams Connector (the $PxNodeFqdns variable in the initialization script) as the name of the Conferencing Nodes that it will communicate with.

This approach makes it easier to add extra Conferencing Nodes into the pool as they will all present the same certificate/subject name to the Teams Connector. If you add a new Conferencing Node with a name that is not configured on the Teams Connector you will have to redeploy the Teams Connector and specify the new names.
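As an illustrative sketch of the pool-name approach (all names here are hypothetical examples):

```powershell
# Pool name presented by every Conferencing Node that talks to the Teams Connector.
# The certificate on each node uses this pool name as its Subject, and lists the
# individual node FQDNs (and the pool name) as Subject Alternative Names, e.g.:
#   Subject: CN=px-pool.example.com
#   SANs:    px-pool.example.com, px01.example.com, px02.example.com
$PxNodeFqdns = "px-pool.example.com"
```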

See Certificate and DNS examples for a Microsoft Teams integration for more information and examples about certificates, DNS records and using a "pool name" for Conferencing Nodes.

Firewall ports for the Teams Connector and NSG rules

The following table lists the ports/protocols used to carry traffic between the Teams Connector components and Teams (Microsoft 365), your public-facing Conferencing Nodes (typically Proxying Edge Nodes), the Management Node and any management networks.

| Source address | Source port | Destination address | Destination port | Protocol | Notes |
|---|---|---|---|---|---|
| **Conferencing Nodes** | | | | | |
| Conferencing Nodes | 33000–39999 * / ephemeral | Teams Connector load balancer / Teams Connector instance | 443 | TCP | Initial call signaling is via the load balancer. When the call is established, signaling is directly between the Conferencing Node and the Teams Connector instance. |
| Conferencing Nodes | 40000–49999 * | Teams Connector instance | 50000–54999 | UDP | Call media |
| **Management Node** | | | | | |
| Management Node | ephemeral | Teams Connector Azure Event Hub | 5671 | AMQPS | Only required if the Azure Event Hub is enabled for advanced status reporting |
| **Teams Connector components** | | | | | |
| Teams Connector instance | ephemeral | Teams (Microsoft 365) | <any> | TCP | Signaling |
| Teams Connector instance | ephemeral | Conferencing Nodes | 443, 4277 | TCP | Signaling (4277 is for calls between Microsoft Teams Rooms and VTC endpoints) |
| Teams Connector instance | 50000–54999 | Conferencing Nodes | 40000–49999 * | UDP | Call media |
| Teams Connector instance | 55000–59999 | Teams (Microsoft 365) | <any> | UDP | Call media |
| Teams Connector instance | ephemeral | OCSP responder | 80 | TCP | Certificate revocation checking |
| Teams Connector instance | ephemeral | Windows update servers | 80/443 | TCP | Windows updates |
| Teams Connector instance | ephemeral | Teams Connector Azure Event Hub | 5671 | AMQPS | Only required if the Azure Event Hub is enabled for advanced status reporting; this port is managed by the Azure NSG |
| **Teams (Microsoft 365)** | | | | | |
| Teams (Microsoft 365) | <any> | Teams Connector load balancer | 10101, 11000–11399, 12000–12399 | TCP | Signaling |
| Teams (Microsoft 365) | <any> | Teams Connector instance | 55000–59999 | UDP | Call media |
| **Management** | | | | | |
| Management workstation | <any> | Teams Connector load balancer / Teams Connector instance | 50000–50399 / 3389 | TCP | Only enabled for any workstation addresses specified during Teams Connector installation |
| Client application viewing the meeting invitation | <any> | Conferencing Nodes † | 443 | TCP | Access to Alternative Dial Instructions |

* Configurable via the Media port range start/end, and Signaling port range start/end options (see About global settings).

† The Conferencing Nodes referenced in the InstructionUri for the "Alternate VTC dialing instructions".

Teams Connector firewall ports

Teams Connector Network Security Group (NSG)

A Network Security Group that supports these firewall requirements is created automatically in Azure as a part of the Teams Connector installation process, and is assigned to each Teams Connector instance. Note that the NSG includes:

  • Rules used for internal traffic within the Teams Connector that is forwarded from the load balancer to the instances (to ports 10100 and 20100) — these ports do not need to be opened between the Conferencing Nodes / Microsoft Teams and the Teams Connector. Similarly, the NSG allows the instances to push data to the Event Hub.
  • An "RDP" rule (priority 1000): if the $PxMgmtSrcAddrPrefixes installation variable contains addresses, this rule allows RDP access to the Teams Connector instances from those addresses. If no addresses are specified then a Deny rule is created (a full redeploy is required if you subsequently want to allow RDP access).

You may need to modify some of the NSG rules in the future if you subsequently add more Conferencing Nodes to your Pexip Infinity platform, or change the addresses of any management workstations.

Teams Connector Network Security Group

In addition to this:

  • You must allow the relevant ports through any of your own firewalls that sit between the Teams Connector components and your public-facing Conferencing Nodes and management networks.
  • If you enable advanced status reporting you must also allow the Pexip Infinity Management Node to connect out to the Azure Event Hub component. If required you can also lock down the Event Hub to only allow internet access from your Management Node IP address; access is currently unrestricted as it is secured by a connection string and the AMQPS protocol.

Additional deployment information

The following features are provided/enabled automatically as part of the deployment process:

  • VMSS (virtual machine scale set) disk encryption is enabled by default. The keys are stored in an Azure Key Vault. Note that the disk encryption can affect performance for approximately 30 minutes after the deployment has finished.
  • The Teams Connector VMs use managed identities to access the storage account in the static resource group that is used for logging, crash reports and the application ZIP.