Deploying a Conferencing Node on a KVM host

To deploy a new Conferencing Node onto a KVM host, you must:

  1. Use the Pexip Infinity Administrator interface to generate and download the .vmdk image.
  2. Convert the .vmdk image for use with KVM.
  3. Create a new volume on your KVM server and upload the disk image.
  4. Create the Conferencing Node virtual machine.
  5. Enable the virtual machine for automatic startup.

These steps are described in detail below.

Note that:

  • The generated .vmdk file is specific to the Conferencing Node being deployed. It cannot be used to deploy multiple Conferencing Nodes.
  • The file is single-use. It cannot be used to re-deploy the same Conferencing Node at a later date. To re-deploy the Conferencing Node, you must first delete it from the Pexip Infinity Management Node and from your host server, and then deploy a new Conferencing Node with the same configuration as the deleted node.
  • Before you start, ensure that you are using the same machine from which you will subsequently upload the generated file to your host server.

Generate and download the .vmdk image

  1. From the Pexip Infinity Administrator interface, go to Platform > Conferencing Nodes and select Add Conferencing Node.
  2. You are now asked to provide the network configuration to be applied to the Conferencing Node, by completing the following fields:

    Option Description
    Name

    Enter the name to use when referring to this Conferencing Node in the Pexip Infinity Administrator interface.

    Description

    An optional field where you can provide more information about the Conferencing Node.

    Role

    This determines the Conferencing Node's role:

    • Proxying Edge Node: a Proxying Edge Node handles all media and signaling connections with an endpoint or external device, but does not host any conferences — instead it forwards the media on to a Transcoding Conferencing Node for processing.
    • Transcoding Conferencing Node: a Transcoding Conferencing Node handles all the media processing, protocol interworking, mixing and so on that is required in hosting Pexip Infinity calls and conferences. When combined with Proxying Edge Nodes, a transcoding node typically only processes the media forwarded on to it by those proxying nodes and has no direct connection with endpoints or external devices. However, a transcoding node can still receive and process the signaling and media directly from an endpoint or external device if required.

    See Distributed Proxying Edge Nodes for more information.

    Hostname

    Domain

    Enter the hostname and domain to assign to this Conferencing Node. Each Conferencing Node and Management Node must have a unique hostname.

    The Hostname and Domain together make up the Conferencing Node's DNS name or FQDN. We recommend that you assign valid DNS names to all your Conferencing Nodes. For more information, see Assigning hostnames and FQDNs.

    IPv4 address

    Enter the IP address to assign to this Conferencing Node when it is created.

    Network mask

    Enter the IP network mask to assign to this Conferencing Node.

    Note that IPv4 address and Network mask apply to the eth0 interface.

    Gateway IPv4 address

    Enter the IP address of the default gateway to assign to this Conferencing Node.

    Note that the Gateway IPv4 address is not directly associated with a particular network interface; however, the address entered here must lie within the subnet used by either eth0 or eth1. If the gateway address lies in the subnet used by eth0, the gateway is assigned to eth0, and likewise for eth1.

    Secondary interface IPv4 address

    The optional secondary interface IPv4 address for this Conferencing Node. If configured, this interface is used for signaling and media communications to clients, and the primary interface is used for communication with the Management Node and other Conferencing Nodes. For more information, see Conferencing Nodes with dual network interfaces (NICs).

    Secondary interface network mask

    The optional secondary interface network mask for this Conferencing Node.

    Note that Secondary interface IPv4 address and Secondary interface network mask apply to the eth1 interface.

    System location

    Select the physical location of this Conferencing Node. A system location should not contain a mixture of proxying nodes and transcoding nodes.

    If the system location does not already exist, you can create a new one here by clicking the + icon to the right of the field. This will open up a new window showing the Add system location page. For further information, see About system locations.

    Configured FQDN

    A unique identity for this Conferencing Node, used in signaling SIP TLS Contact addresses. For more information, see Assigning a Configured FQDN.

    TLS certificate

    The TLS certificate to use on this node. This must be a certificate that contains the above Configured FQDN. Each certificate is shown in the format <subject name> (<issuer>).

    IPv6 address

    The IPv6 address for this Conferencing Node. Each Conferencing Node must have a unique IPv6 address.

    Gateway IPv6 address

    The IPv6 address of the default gateway.

    If this is left blank, the Conferencing Node listens for IPv6 Router Advertisements to obtain a gateway address.

    IPv4 static NAT address

    The public IPv4 address used by this Conferencing Node when it is located behind a NAT device. Note that if you are using NAT, you must also configure your NAT device to route the Conferencing Node's IPv4 static NAT address to its IPv4 address.

    For more information, see Configuring Pexip Infinity nodes to work behind a static NAT device.

    Static routes

    From the list of Available Static routes, select the routes to assign to the node, and then use the right arrow to move the selected routes into the Chosen Static routes list. For more information, see Managing static routes.

    Enable distributed database

    This should usually be enabled (checked) for all Conferencing Nodes that are expected to be "always on", and disabled (unchecked) for nodes that are expected to only be powered on some of the time (e.g. cloud bursting nodes that are likely to only be operational during peak times).

    Enable SSH

    Determines whether this node can be accessed over SSH.

    Use Global SSH setting: SSH access to this node is determined by the global Enable SSH setting (Platform > Global settings > Connectivity > Enable SSH).

    Off: this node cannot be accessed over SSH, regardless of the global Enable SSH setting.

    On: this node can be accessed over SSH, regardless of the global Enable SSH setting.

    Default: Use Global SSH setting.

    SSH authorized keys

    You can optionally assign one or more SSH authorized keys to use for SSH access.

    From the list of Available SSH authorized keys, select the keys to assign to the node, and then use the right arrow to move the selected keys into the Chosen SSH authorized keys list.

    Note that in cloud environments, this list does not include any of the SSH keys configured within that cloud service.

    For more information, see Configuring SSH authorized keys.

    Use SSH authorized keys from cloud service

    When a node is deployed in a cloud environment, you can continue to use the SSH keys configured within the cloud service where available, in addition to any of your own assigned keys (as configured in the field above). If you disable this option you can only use your own assigned keys.

    Default: enabled.

  3. Select Save.
  4. You are now asked to complete the following fields:

    Option Description
    Deployment type

    Select Manual (KVM).

    SSH password

    Enter the password to use when logging in to this Conferencing Node's Linux operating system over SSH. The username is always admin.

    Logging in to the operating system is required when changing passwords or for diagnostic purposes only, and should generally be done under the guidance of your Pexip authorized support representative. In particular, do not change any configuration using SSH — all changes should be made using the Pexip Infinity Administrator interface.

  5. Select Download.

    A message appears at the top of the page: "The Conferencing Node image will download shortly or click on the following link".

    After a short while, a file with the name pexip-<hostname>.<domain>.vmdk is generated and downloaded.

    Note that the generated file is only available for your current session, so you should download it immediately.

Convert the .vmdk image for use with KVM

To use the Conferencing Node VMDK image file with KVM, you must convert it to raw format:

  1. Copy the downloaded VMDK file (named pexip-<hostname>.<domain>.vmdk) to the server running KVM.
  2. Convert the disk image from VMDK to raw, using the command:

    qemu-img convert -O raw <downloaded filename> pexip-disk01.raw

    (This conversion process can take several seconds.)
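
    For example, assuming the downloaded file is named pexip-conf-01.example.com.vmdk (the actual filename depends on the hostname and domain you configured for the node), the command would be:

    qemu-img convert -O raw pexip-conf-01.example.com.vmdk pexip-disk01.raw

    If you want to check the result before uploading it, qemu-img info pexip-disk01.raw reports the image's file format and virtual size; the format should be shown as raw.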

Create a new volume and upload the disk image

Next, you create a new volume on your KVM server and upload the converted disk image. From within your KVM environment:

  1. Use virsh to create a new volume on your KVM server:

    virsh --connect qemu://<hostname>/system vol-create-as <poolname> <volume_name> 49G --format raw

    where:

    <hostname> is the hostname of your KVM server. Note that you can omit the <hostname> if you are running virsh commands on the local server, i.e. you can use virsh --connect qemu:///system.

    <poolname> is the name of the storage pool in which to create the volume; typically you would use default. (To determine the storage pools available on the target system, use virsh --connect qemu://<hostname>/system pool-list.)

    <volume_name> is the name of your new volume.

    49G is the virtual size of the volume; always use 49G for a Conferencing Node.

    For example:

    virsh --connect qemu://host1.example.com/system vol-create-as default pexip-conf-01 49G --format raw

    This example creates a volume named pexip-conf-01 of size 49 GB and format raw in the storage pool named default.

  2. Upload the converted disk image to the newly created volume:

    virsh --connect qemu://<hostname>/system vol-upload <volume_name> pexip-disk01.raw --pool <poolname>

    For example:

    virsh --connect qemu://host1.example.com/system vol-upload pexip-conf-01 pexip-disk01.raw --pool default

    This example uploads the pexip-disk01.raw image to the newly created volume, pexip-conf-01, in the storage pool named default.
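
    If you want to verify the volume and the uploaded contents, the following commands (using the same example hostname, pool and volume name) list the volumes in the pool and show the volume's capacity and current allocation:

    virsh --connect qemu://host1.example.com/system vol-list default
    virsh --connect qemu://host1.example.com/system vol-info pexip-conf-01 --pool default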

Create the virtual machine

After the disk image has been uploaded, you can create the virtual machine to use it.

Note that we use the libvirt command line tools to perform the import as they provide greater control than Virtual Machine Manager.

  1. Identify the filesystem path of the newly uploaded disk image:

    virsh --connect qemu://<hostname>/system vol-path <volume_name> --pool <poolname>

    For example:

    virsh --connect qemu://host1.example.com/system vol-path pexip-conf-01 --pool default

    This prints out the absolute path to the disk image file, for example:

    /var/lib/libvirt/images/pexip-conf-01

    This path is used in the disk path parameter in the next step.

  2. Use the virt-install command line tool to create the virtual machine:

    virt-install \
      --import \
      --hvm \
      --name=<vm_name> \
      --arch=x86_64 \
      --vcpus=4 \
      --ram=4096 \
      --cpu host \
      --os-type=linux \
      --connect=qemu://<hostname>/system \
      --virt-type kvm \
      --disk path=<image_file_path>,bus=virtio,format=raw,cache=none,io=native \
      --network bridge=br0,model=virtio \
      --memballoon virtio \
      --graphics vnc,listen=0.0.0.0,password=<password>

    This creates a new VM (KVM domain) from the converted disk image.

    The command options are described below (values shown in angle brackets, and environment-specific settings such as the VM name and bridge, may be changed as necessary); a completed example is shown below the table:

    Option Description
    --import Build guest domain around pre-installed disk image; do not attempt to install a new OS.
    --hvm Create a fully virtualized (i.e. not paravirtualized) VM.
    --name=<vm_name> Name of the new VM, where <vm_name> is, for example, pexip-conf01-vm.
    --arch=x86_64 CPU architecture of new VM (must be x86_64).
    --vcpus=4 Number of CPUs allocated to new VM. By default, this is 4 for the Conferencing Node.
    --ram=4096 Memory allocated to new VM (in megabytes).
    --cpu host Expose all host CPU capabilities to new VM (CPUID).
    --os-type=linux The guest OS is Linux.
    --connect=qemu://<hostname>/system Connect to KVM on the target system, where <hostname> is the hostname of your KVM server.
    --virt-type kvm Use KVM to host the new VM.
    --disk path=<image_file_path>,bus=virtio,format=raw,cache=none,io=native
    • Define the location of the disk image file, where <image_file_path> is as determined in the previous step, for example /var/lib/libvirt/images/pexip-conf-01.
    • Expose it to the guest on the virtio paravirtualized bus (as opposed to IDE/SCSI).
    • Define the image file as being in raw format.
    • Instruct the host system not to cache the disk contents in memory.
    • Use the native IO backend to access the disk device.
    --network bridge=br0,model=virtio
    • Create a network interface connected to the br0 bridge interface on the host.
    • Expose it to the guest as a virtio paravirtualized NIC.
    --memballoon virtio Expose the virtio memory balloon to the guest.
    --graphics vnc,listen=0.0.0.0,password=<password>
    Expose the graphical console over VNC, listening on 0.0.0.0 (i.e. all addresses on the target system) and with an access password of <password>.

    You may receive a warning "Unable to connect to graphical console: virt-viewer not installed"; if so, this message can be safely ignored.
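
    For reference, a completed command using the example values from the earlier steps might look like the following (the VM name, KVM server hostname, bridge name and disk path are examples and should be adjusted to your environment; replace <password> with a VNC access password of your choice):

    virt-install \
      --import \
      --hvm \
      --name=pexip-conf01-vm \
      --arch=x86_64 \
      --vcpus=4 \
      --ram=4096 \
      --cpu host \
      --os-type=linux \
      --connect=qemu://host1.example.com/system \
      --virt-type kvm \
      --disk path=/var/lib/libvirt/images/pexip-conf-01,bus=virtio,format=raw,cache=none,io=native \
      --network bridge=br0,model=virtio \
      --memballoon virtio \
      --graphics vnc,listen=0.0.0.0,password=<password>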

    After the VM has been created, it may be managed using the Virtual Machine Manager desktop interface (virt-manager application) or via the command line interface (virsh).

    The new node should start automatically. If it does not, you can use the Virtual Machine Manager to start the node, or the following CLI command:

    virsh --connect qemu://<hostname>/system start <vm_name>

    Note that you can list existing VMs by using virsh --connect qemu://<hostname>/system list
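
    For example, using the example hostname and VM name from the previous steps (adjust these to match your deployment), you could check the VM's state and start it with:

    virsh --connect qemu://host1.example.com/system list --all
    virsh --connect qemu://host1.example.com/system start pexip-conf01-vm

    Adding --all to the list command also shows VMs that are not currently running.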

After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for its status to be updated on the Management Node. Until it becomes available, the Management Node reports the status of the Conferencing Node as having a last contacted and last updated date of "Never". "Connectivity lost between nodes" alarms relating to that node may also appear temporarily.

Enabling automatic startup

After deploying a new Conferencing Node in KVM, you should enable automatic startup of that virtual machine (VM). In KVM, automatic startup is disabled by default for every new VM. This means that if the host server is powered down for any reason, when it restarts the VM will not restart and must be started manually.

You can only enable automatic startup after the Conferencing Node has been deployed.

To enable automatic startup using Virtual Machine Manager:

  1. Connect to the Virtual Machine Manager (virt-manager) that is managing the node's VM.
  2. Select the node's VM and then, from the toolbar, select the Show the virtual machine console and details icon.

    A new window for that VM is opened.

  3. If necessary, select View > Details to display the VM information.
  4. From the sidebar menu, select Boot Options.
  5. Select the Start virtual machine on host boot up check box.
  6. Select Apply.
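
Alternatively, automatic startup can be enabled from the command line with virsh; a minimal equivalent of the steps above, using the example hostname and VM name from earlier, is:

    virsh --connect qemu://host1.example.com/system autostart pexip-conf01-vm

To disable automatic startup again, run the same command with the --disable flag.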

Next steps

See Testing and next steps after initial installation.