Planning, prerequisites and firewall ports for your Microsoft Teams integration
This topic provides an overview of the Pexip Teams Connector architecture, your deployment environment options, and all certificate, network and firewall considerations and requirements.
- Architecture overview
- Deployment and upgrade strategy
- Preparing your Azure environment, regions, instance type, and capacity planning
- Network and certificate requirements
- Firewall ports for the Teams Connector and NSG rules
- Additional deployment information
You can then install your Teams Connector as described in Installing and configuring the Teams Connector in Azure.
Architecture overview
The Pexip Teams Connector is a Pexip application that is deployed in Microsoft Azure and is used to enable Cloud Video Interoperability (CVI) with Microsoft Teams. It handles all Teams communications and meeting requests from the Pexip Infinity platform and passes them on to the Microsoft Teams environment. The dedicated application ensures control and ownership for organizations with stringent regulatory compliance requirements.
The diagram below shows the Teams Connector components that are deployed in Azure, and how they interact with the Pexip Infinity platform and Microsoft Teams. Note that:
- The Azure virtual machine scale set (VMSS) allows the Pexip application to run across a group of identical, load-balanced VMs.
- The Azure Standard Load Balancer enables the use of Azure Availability Zones, which are used by default if they are available in your selected region. It is represented twice in the diagram (performing load balancing towards Pexip Infinity, and NAT towards Teams), but it is the same single Azure resource.
You do not have to set up these Azure components individually — they are all created as part of the Teams Connector deployment process.
Pexip Infinity platform
While the Teams Connector must be deployed in Microsoft Azure, the Pexip Infinity platform can be installed in any supported environment such as on-premises or in a public or hybrid cloud (which would typically be Microsoft Azure when integrating with Microsoft Teams).
The Pexip Infinity platform can be deployed on-premises with public-facing Conferencing Nodes used to connect to the Pexip Teams Connector in Azure.
In this example deployment, external endpoints and federated systems, as well as on-premises devices, can all connect to Teams conferences via the Pexip DMZ nodes.
The Pexip Infinity platform can be deployed in a dedicated public or hybrid cloud within your own cloud subscription, providing full control over your environment.
Here, external endpoints, federated systems and on-premises devices can all connect to Teams conferences via the cloud-hosted Pexip Infinity nodes. You could use any supported cloud service but you would typically deploy your Conferencing Nodes in Microsoft Azure alongside your Pexip Teams Connector.
Including third-party call control
The Pexip Teams Connector and the Pexip Infinity platform can both be deployed in Azure with an on-premises, third-party call control system.
If you have a third-party call control system that you want to retain, it can be configured to connect your on-premises systems to the cloud-hosted Pexip Infinity platform.
Pexip Infinity has a close integration with Microsoft Teams and uses Teams APIs and Microsoft SDKs to provide Infinity's interoperability features. Although Pexip strives to maintain backwards compatibility between older versions of Pexip Infinity and the latest release of Microsoft Teams, we recommend that you keep your Pexip Infinity deployment up-to-date with the latest Pexip Infinity software release to ensure compatibility with the latest updates to Teams. If, for example, you have a large Pexip deployment for non-Teams related services, and stringent upgrade procedures mean that you do not always keep your Infinity software up-to-date with the latest release, you may want to consider deploying a second instance of the Pexip Infinity platform that is dedicated to your Teams interoperability requirements, and which can be managed separately and upgraded more frequently.
See Pexip Infinity installation guidelines for complete information about all of the platforms into which you can deploy the Pexip Infinity platform, and Configuring Pexip Infinity as a Microsoft Teams gateway for specific instructions about how to integrate Pexip Infinity with the Teams Connector.
Deployment and upgrade strategy
We recommend using a "blue-green" deployment strategy for the installation and future upgrade of your Teams Connector. This approach allows you to create separate environments where one environment (blue) is running the current application version and another environment (green) is running the new application version. This means that you can test both of these deployments separately, and switch between them.
This is different from a simple deployment and upgrade strategy that involves deploying a single Teams Connector environment that is destructively replaced during the upgrade process.
A blue-green deployment strategy, where you can have two Teams Connector environments deployed in parallel, has the following benefits:
- Provides a non-destructive upgrade path.
- Increases application availability during the upgrade process.
- Enables upgrade activities to be done in business hours, when access to the required service administrators/owners is more readily available, before a planned "switch over" window.
- Reduces time-pressure and risk if there are delays due to Azure resources not being available.
- Reduces deployment risk by simplifying the rollback process if a deployment fails.
You should also review our guidelines on testing and version compatibility during an upgrade.
Advantages over alternative strategies
An alternative upgrade strategy that also enables you to maintain a working production environment while upgrading to a new software version would be to deploy a brand new Teams Connector for every software release and then switch to using that system when it is tested and ready.
This would work, but the drawback is that every upgrade requires you to involve all of the service administrators/owners and repeat all of the other installation steps, such as creating DNS records, possibly issuing a new certificate, obtaining Azure permissions, creating a new API app and so on. With a blue-green strategy you instead plan ahead and set up two systems at the time of the initial deployment, keeping and re-using the same resources: you just switch back and forth between the two environments.
How a blue-green strategy works
The main difference between the blue-green deployment and upgrade strategy compared to a single deployment is that you deploy two Teams Connectors (one called "blue" and another called "green") and switch between them as you upgrade from one version of Teams Connector software to the next. Whereas with the regular deployment you would only maintain one Teams Connector which is replaced at every upgrade cycle.
Specifically, the differences between the two methods are:
- For blue-green you deploy two Teams Connectors. Each Teams Connector needs a unique DNS name and requires a TLS certificate that matches that identity. You use different names for the $PxVmssRegion variable to create specific, separate Azure resources for the two deployments (you use a separate variables script for each deployment).
- For blue-green each deployment requires its own API app. However, the two deployments can both use the same CVI app and you only need to perform one app authorization process (per tenant). All of your Teams Connectors can use the same Azure bot.
- With a single deployment procedure:
- You simply replace your existing single deployment every time you upgrade to the next Teams Connector software version. It reuses the existing Azure resources, DNS records and so on.
- While upgrading you have a period of time when you have no Teams interop service and no ability to separately test the latest version. Typically you have to perform the entire upgrade process out of hours.
- With the blue-green procedure:
- You have one deployment which is currently active (in "production") and one that is dormant. When you upgrade, you upgrade the dormant system, without any interference to the active system. You can then test the new system and switchover to that new system when you are happy it is functioning as expected. The previously active system then becomes dormant until the next upgrade occasion.
- You have two DNS records — one for the "blue" system and one for the "green" system.
- Within Pexip Infinity we suggest that you configure two Teams Connector systems: one for the active/production system (used by all of your Call Routing Rules) and one for the dormant/test system (used by a "test" Call Routing Rule).
- At each switchover you toggle the addresses of the two Teams Connector systems that are configured in the Pexip Infinity Management Node, i.e. they switch between pointing at the "blue" and "green" systems.
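The switchover described above is simply a swap of which Teams Connector FQDN the production Call Routing Rules point at. A minimal illustrative sketch (Python, with hypothetical example FQDNs; this is not Pexip code):

```python
# Illustrative model of the blue-green switchover: Pexip Infinity keeps two
# Teams Connector addresses and toggles which one is "production".
# The FQDNs below are example values.

BLUE = "pexip-teamsconn-blue.teams.example.com"
GREEN = "pexip-teamsconn-green.teams.example.com"

class TeamsConnectorPair:
    def __init__(self, active: str, dormant: str):
        self.active = active      # used by all production Call Routing Rules
        self.dormant = dormant    # used by the "test" Call Routing Rule

    def switch_over(self) -> None:
        """Swap the two addresses, e.g. after the dormant system has been
        upgraded and tested."""
        self.active, self.dormant = self.dormant, self.active

pair = TeamsConnectorPair(active=BLUE, dormant=GREEN)
pair.switch_over()
print(pair.active)  # the "green" system is now in production
```

The previously active system becomes the dormant one, ready to receive the next upgrade.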
Preparing your Azure environment, regions, instance type, and capacity planning
This section lists the various preparation steps you must perform before starting your Teams Connector installation into Azure.
Obtain an Azure subscription and an Azure tenant ID
Ensure that you have an Azure subscription and an Azure tenant ID for your Teams Connector deployment.
Note that some of the installation steps must be performed by somebody with Owner permissions for the Azure subscription (see Azure permissions requirements for more information).
Decide Azure deployment region(s) and instance type
Decide in which Azure region and with which instance type you want to deploy the Teams Connector. Large enterprises may want to install a Teams Connector in multiple regions.
You must use one of the following instance types:
- Standard_D4s_v5 (Intel)
- Standard_D4as_v5 (AMD)
- Standard_F4s (previous default)
The type is specified via $PxAzureVmSize in the variables script and is set to Standard_D4s_v5 by default.
You should consider your VM quota, instance type availability in your chosen region, and pricing. Each instance type has the same call capacity.
The Azure region must support Automation and your nominated instance type. (See the Microsoft articles Azure automation for more information about Automation, and Azure product availability by region.)
You can use the following PowerShell script to list the current set of Azure regions that support Automation and your preferred instance type. The script lists where Standard_D4s_v5 is supported, but you can change this value in the script to Standard_D4as_v5 or Standard_F4s as required.
$(
    $automationLocations = ((Get-AzResourceProvider -ProviderNamespace Microsoft.Automation).ResourceTypes | Where-Object ResourceTypeName -eq 'automationAccounts').Locations;
    $vmLocations = ((Get-AzResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes | Where-Object ResourceTypeName -eq 'locations/virtualMachines').Locations;
    $vmInstanceLocations = $vmLocations | ?{ 'Standard_D4s_v5' -in @(Get-AzVMSize -Location $_ | %{ $_.Name }) };
    Get-AzLocation | ?{ ($_.DisplayName -in $automationLocations) -and ($_.DisplayName -in $vmInstanceLocations) }
) | Select-Object -Property DisplayName,Location
Note this script does not support the Department of Defense (DoD) / Azure Government regions.
Ensure that you have sufficient resource quota and capacity for your region and instance types
By default, Azure Resource Manager virtual machine cores have a regional total limit and a regional per-series limit, both enforced per subscription. Typically, for each subscription, the default quota allows up to 10-20 CPU cores per region and 10-20 cores per series.
The allocated quota may be increased by opening a support ticket with Microsoft via the Azure Portal.
You need to be able to deploy at least one Automation account when deploying the Teams Connector. Ensure you have sufficient automation account quota available.
You can use the Azure portal to check your current usage and limits, and to request a quota increase or decrease by creating a support request. See this Azure article for more information.
GCC High / Azure US Government Cloud deployments
You can deploy Teams Connectors in the standard Azure commercial cloud, or where necessary for specific compliance requirements, in the Azure US Government environment.
Note that you can deploy Teams Connectors in Azure US Government in combination with a commercial Teams environment or a Teams GCC environment. However, for Teams GCC High environments, Teams Connectors must be deployed in Azure US Government.
The Teams Connector must be hosted in the Arizona or Texas Azure regions for GCC High. The Virginia Azure region does not support the hosting of the Teams Connector for GCC High.
Most of the deployment processes are the same for either Azure environment, but there are some extra steps and minor differences when deploying in a US Government environment. Where there are variances indicated within this documentation you should either follow the standard deployment instructions (for commercial Azure) or the GCC High / Azure US Government Cloud deployment instructions.
Capacity planning
Each Teams Connector instance can handle a maximum of 16 calls, although the capacity of each instance can vary depending on various factors such as call resolution, the number of presentation streams, and the number of participants in the Teams meetings. For capacity planning purposes we recommend that you assume 15 calls per instance.
- See Scheduling scaling and managing Teams Connector capacity for information about how you can control your available capacity.
- For information about the Pexip Infinity resources required to route calls to the Teams Connector, see Gateway calls to Microsoft Teams.
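The capacity recommendation above reduces to simple arithmetic. An illustrative helper (Python; not Pexip tooling) that sizes a deployment from the recommended 15-calls-per-instance planning figure:

```python
import math

# Illustrative capacity-planning helper (not Pexip tooling). The 15-calls
# figure is the planning recommendation above; the hard ceiling is 16 but
# real capacity varies with resolution, presentation streams and meeting size.
CALLS_PER_INSTANCE = 15

def instances_needed(peak_concurrent_calls: int) -> int:
    """Minimum number of Teams Connector instances for a given peak load."""
    return math.ceil(peak_concurrent_calls / CALLS_PER_INSTANCE)

print(instances_needed(40))  # 40 concurrent calls -> 3 instances
```

For example, a peak of 40 concurrent gateway calls needs at least three instances.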
Network and certificate requirements
This diagram shows how the main elements in a standard Microsoft Teams integration communicate with each other and how the connection between each element is validated/authenticated.
- You must have one or more publicly-reachable Conferencing Nodes. Those nodes:
- can be Transcoding Conferencing Nodes or Proxying Edge Nodes
- can have static NAT and/or dual network interfaces, as the Teams Connector is treated as a lineside connection.
- The public-facing Conferencing Nodes always communicate with the Teams Connector via public IP, even if they are within the same Azure tenant.
- The Teams Connector communicates with the Teams (Microsoft 365) backend via public IP; all traffic stays within the Microsoft network.
- The Teams Connector supports connections over TLSv1.2 only, and does not support the RC2, RC4, DES and 3DES ciphers.
As an alternative network configuration you can consider using private routing between the Teams Connector and Pexip Infinity — see Using private routing with the Teams Connector for more information. If you don't want to use private routing but still need more control over the Teams Connector VNET, you can deploy and use your own customized VNET — see Using a custom VNET with the Teams Connector.
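The TLSv1.2-only policy can be mirrored on a client. A brief illustrative sketch using Python's standard `ssl` module (an assumption for illustration, not Pexip code), pinning both the minimum and maximum protocol version:

```python
import ssl

# Illustrative sketch: a client-side SSL context restricted to TLSv1.2,
# matching the Teams Connector's stated policy. This is not Pexip code;
# it just shows how a client could pin the protocol version.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_2
# Note: modern default cipher lists already exclude weak ciphers such as
# RC2, RC4, DES and 3DES, so no extra cipher configuration is needed here.
```

A context built this way will refuse to negotiate TLSv1.0, TLSv1.1 or TLSv1.3 with a peer.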
In summary, the certificate usage principles are:
- The Teams Connector and Pexip Infinity validate the connection in both directions by TLS client certificate validation. This means that every certificate's Enhanced Key Usage properties must be set for both server and client authentication.
- Public-facing Conferencing Nodes must have a valid publicly-signed PEM-formatted certificate (typically with a .CRT or .PEM extension).
- The Teams Connector must have a publicly-signed PFX-formatted certificate. Multiple names (or multiple certificates) are required when deploying Teams Connectors in a blue-green strategy, and when deploying Teams Connectors in several regions.
Obtaining and preparing the TLS certificate for the Teams Connectors
You must install on the Teams Connectors a TLS certificate that has been signed by an external trusted CA (certificate authority).
You need to have this certificate available before you install the Teams Connectors.
The certificate must be in Personal Information Exchange Format (PFX), also known as PKCS #12, which enables the transfer of certificates and their private keys from one system to another. It must use RSA keys.
- Decide on the FQDN (DNS name) you will use for the Teams Connector load balancer in Azure that will front each Teams Connector deployment.
- When following a blue-green strategy:
- Each Teams Connector needs a separate FQDN / DNS name, for example pexip-teamsconn-blue.teams.example.com and pexip-teamsconn-green.teams.example.com.
- You can use a single certificate for both Teams Connectors, containing Subject Alternative Name (altNames attribute) entries for the "blue" and the "green" Teams Connectors.
- These FQDNs are what you will use as:
- The value of $PxTeamsConnFqdn in the variables initialization script (you will use a separate script per Teams Connector i.e. one for "blue" and one for "green").
- The certificate's Subject Alternative Names; you can also use any one of them as the subject name.
- The DNS names you will configure in Pexip Infinity later in the process.
- It can use the same domain space as your Pexip Infinity deployment, or your Teams deployment, or it can use an altogether different domain. In all cases you always need to create the necessary DNS A-records and public certificates for the chosen domain.
- If you intend to deploy other Teams Connectors in other Azure regions, you will also need a different DNS name for each regional pair of Teams Connectors and a certificate that matches those identities. Again, you can use a single certificate for this, containing Subject Alternative Name entries for all of the regional Teams Connectors.
- It can be a wildcard certificate, where the wildcard character ('*') is the only character of the left-most label of a DNS domain name. Note that Pexip supports RFC 6125 — this means that if you are using subdomains then, for example, a wildcard certificate of *.example.com would match foo.example.com but not bar.foo.example.com or example.com.
- Request a certificate for that name and generate the certificate in PFX format. Any intermediate certificates must also be included in the PFX file.
You can use the Pexip Infinity Management Node to generate a certificate signing request (CSR).
You can use the Pexip Infinity Management Node to convert PEM certificates to PFX format (or vice versa), by uploading a PEM-formatted certificate and then downloading it again in PFX format. When downloading you can also include the private key and all necessary intermediate certificates in the PFX bundle.
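The RFC 6125 wildcard rule described above ('*' may only be the entire left-most label, and it matches exactly one label) can be illustrated with a short sketch (Python; an illustration of the matching rule, not Pexip's implementation):

```python
# Illustrative sketch of RFC 6125 wildcard matching: '*' must be the whole
# left-most label and matches exactly one DNS label.
def wildcard_matches(pattern: str, hostname: str) -> bool:
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if p_labels[0] != "*":
        return p_labels == h_labels          # no wildcard: exact match only
    # '*' matches exactly one label, so the label counts must be equal
    return len(p_labels) == len(h_labels) and p_labels[1:] == h_labels[1:]

print(wildcard_matches("*.example.com", "foo.example.com"))      # True
print(wildcard_matches("*.example.com", "bar.foo.example.com"))  # False
print(wildcard_matches("*.example.com", "example.com"))          # False
```

This is why a certificate for *.example.com covers foo.example.com but not bar.foo.example.com or the bare domain example.com.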
Ensuring Conferencing Nodes have suitable certificates
The Conferencing Nodes (typically Proxying Edge Nodes) that will communicate with the Teams Connector must have TLS certificates installed that have been signed by an external trusted CA (certificate authority). If a chain of intermediate CA certificates is installed on the Management Node (to provide the chain of trust for the Conferencing Node's certificate) those intermediate certificates must not include any HTTP-to-HTTPS redirects in their AIA (Authority Information Access) section.
We recommend that you assign a "pool name" to all of the Conferencing Nodes that will communicate with the Teams Connector. The pool name should be used as a common Subject name on the certificate that is uploaded to each of those Conferencing Nodes. The certificate should also contain the individual FQDNs of each of the nodes in the pool as a Subject Alternative Name on the certificate. This pool name can then be specified on the Teams Connector (the $PxNodeFqdns variable in the initialization script) as the name of the Conferencing Nodes that it will communicate with.
This approach makes it easier to add extra Conferencing Nodes into the pool as they will all present the same certificate/subject name to the Teams Connector. If you add a new Conferencing Node with a name that is not configured on the Teams Connector you will have to redeploy the Teams Connector and specify the new names.
See Certificate and DNS examples for a Microsoft Teams integration for more information and examples about certificates, DNS records and using a "pool name" for Conferencing Nodes.
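The "pool name" idea above can be sketched as follows (Python, with hypothetical example names; not Pexip code): every node's certificate carries the shared pool name, so the Teams Connector only needs to be configured with one name however many nodes you add.

```python
# Illustrative sketch of the "pool name" approach. Names are example values.
POOL_NAME = "nodepool.teams.example.com"   # common Subject name on every node

def certificate_sans(pool_name: str, node_fqdn: str) -> list[str]:
    """SAN entries for one Conferencing Node's certificate: the shared pool
    name plus the node's own FQDN."""
    return [pool_name, node_fqdn]

def accepted_by_connector(presented_sans: list[str], configured_name: str) -> bool:
    """The Teams Connector accepts a node if any presented name matches the
    name configured in $PxNodeFqdns."""
    return configured_name in presented_sans

sans = certificate_sans(POOL_NAME, "px01.vc.example.com")
print(accepted_by_connector(sans, POOL_NAME))  # True
```

Because a newly added node presents the same pool name, it is accepted without redeploying the Teams Connector; only a node with a name outside the configured set would force a redeploy.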
Firewall ports for the Teams Connector and NSG rules
The following table lists the ports/protocols used to carry traffic between the Teams Connector components and Teams (Microsoft 365), your public-facing Conferencing Nodes (typically Proxying Edge Nodes), the Management Node and any management networks.
Source address | Source port | Destination address | Destination port | Protocol | Notes
---|---|---|---|---|---
**Conferencing Nodes** | | | | |
Conferencing Nodes | 33000–39999 (ephemeral) | Teams Connector load balancer, then Teams Connector instance | 443 | TCP | Initial call signaling is via the load balancer. When the call is established, signaling is directly between the Conferencing Node and the Teams Connector instance.
Conferencing Nodes | 40000–49999 | Teams Connector instance | 50000–54999 | UDP | Call media
**Management Node** | | | | |
Management Node | ephemeral | Teams Connector Azure Event Hub | 5671 | AMQPS | Only required if the Azure Event Hub is enabled for advanced status reporting
**Teams Connector components** | | | | |
Teams Connector instance | ephemeral | Teams (Microsoft 365) | \<any> | TCP | Signaling
Teams Connector instance | ephemeral | Conferencing Nodes | 443, 4277 | TCP | Signaling (443 is required for standard CVI; 4277 is additionally required for calls between Microsoft Teams Rooms and VTC endpoints)
Teams Connector instance | 50000–54999 | Conferencing Nodes | 40000–49999 | UDP | Call media
Teams Connector instance | 55000–59999 | Teams (Microsoft 365) | \<any> | UDP | Call media
Teams Connector instance | ephemeral | OCSP responder | 80 | TCP | Certificate revocation checking
Teams Connector instance | ephemeral | Windows update servers | 80/443 | TCP | Windows updates
Teams Connector instance | ephemeral | Teams Connector Azure Event Hub | 5671 | AMQPS | Only required if the Azure Event Hub is enabled for advanced status reporting; this port is managed by the Azure NSG
**Teams (Microsoft 365)** | | | | |
Teams (Microsoft 365) | \<any> | Teams Connector load balancer | 10101, 11000–11399, 12000–12399 | TCP | Signaling
Teams (Microsoft 365) | \<any> | Teams Connector instance | 55000–59999 | UDP | Call media
**Management** | | | | |
Management workstation | \<any> | Teams Connector load balancer | 50000–50399 | TCP | Only enabled for any workstation addresses specified during Teams Connector installation
Management workstation | \<any> | Teams Connector instance | 3389 | TCP | Only enabled for any workstation addresses specified during Teams Connector installation
**Client application viewing the meeting invitation** | | | | |
\<any> | \<any> | Conferencing Nodes | 443 | TCP | Access to Alternative Dial Instructions
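The media rows of the table above can be expressed as data, which is useful when auditing your own upstream firewall rules. An illustrative sketch (Python; not Pexip tooling, and the rule set below covers only the UDP media rows):

```python
# Illustrative sketch: the UDP call-media port ranges from the table above,
# with a helper to check whether a given flow is covered.
MEDIA_RULES = [
    # (source, source port range, destination, destination port range or None for <any>)
    ("Conferencing Nodes", (40000, 49999), "Teams Connector instance", (50000, 54999)),
    ("Teams Connector instance", (50000, 54999), "Conferencing Nodes", (40000, 49999)),
    ("Teams Connector instance", (55000, 59999), "Teams (Microsoft 365)", None),
]

def media_flow_allowed(src: str, sport: int, dst: str, dport: int) -> bool:
    for rule_src, sprange, rule_dst, dprange in MEDIA_RULES:
        if src != rule_src or dst != rule_dst:
            continue
        if not (sprange[0] <= sport <= sprange[1]):
            continue
        if dprange is None or dprange[0] <= dport <= dprange[1]:
            return True
    return False

print(media_flow_allowed("Conferencing Nodes", 41000, "Teams Connector instance", 50001))  # True
print(media_flow_allowed("Conferencing Nodes", 41000, "Teams Connector instance", 3389))   # False
```

The same pattern extends naturally to the TCP signaling rows if you want to audit those as well.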
Teams Connector Network Security Group (NSG)
A Network Security Group that supports these firewall requirements is created automatically in Azure as a part of the Teams Connector installation process, and is assigned to each Teams Connector instance. Note that the NSG includes:
- Rules used for internal traffic within the Teams Connector that is forwarded from the load balancer to the instances (to ports 10100 and 20100) — these ports do not need to be opened between the Conferencing Nodes / Microsoft Teams and the Teams Connector. Similarly, the NSG allows the instances to push data to the Event Hub.
- An "RDP" rule (priority 1000): if the $PxMgmtSrcAddrPrefixes installation variable contains addresses, this rule allows RDP access to the Teams Connector instances from those addresses. If no addresses are specified then a Deny rule is created (a full redeploy is required if you subsequently want to allow RDP access).
You may need to modify some of the NSG rules in the future if you subsequently add more Conferencing Nodes to your Pexip Infinity platform, or change the addresses of any management workstations. Contact your Pexip authorized support representative if you need to modify any of the other NSG rules.
In addition to this:
- You must allow the relevant ports through any of your own firewalls that sit between the Teams Connector components and your public-facing Conferencing Nodes and management networks.
- If you enable advanced status reporting you must also allow the Pexip Infinity Management Node to connect out to the Azure Event Hub component. If required you can also lock down the Event Hub to only allow internet access from your Management Node IP address; access is currently unrestricted as it is secured by a connection string and the AMQPS protocol.
Additional deployment information
The following features are provided/enabled automatically as part of the deployment process:
- VMSS (virtual machine scale set) disk encryption is enabled by default. The keys are stored in an Azure Key Vault. Note that the disk encryption can affect performance for approximately 30 minutes after the deployment has finished.
- The Teams Connector VMs use managed identities to access the storage account in the static resource group that is used for logging, crash reports and the application ZIP.