Manually deploying a Conferencing Node on a Xen host

To manually deploy a new Conferencing Node onto a Xen host, you must:

  1. Use the Pexip Infinity Administrator interface to generate and download the .ova image.
  2. Convert the .ova image for use with Xen.
  3. Create a new volume on your Xen server and upload the disk image.
  4. Create the Conferencing Node virtual machine.

  5. Enable the virtual machine for automatic startup.

These steps are described in detail below.

Generate and download the .ova image

Use the Pexip Infinity Administrator interface to generate and download the .ova image:

  1. Go to Platform configuration > Conferencing Nodes and select Add Conferencing Node.
  2. From the Deployment type field, select Manual (Xen).
  3. Select Next.
  4. You are now asked to provide information regarding the CPUs and memory of the Conferencing Node VM, by completing the following fields:

    Number of virtual CPUs to assign: Enter the number of CPUs to assign to the Conferencing Node. We do not recommend that you assign more virtual CPUs than there are physical cores on a single processor on the host server. For example, if the host server has 2 processors each with 8 physical cores, we recommend that you assign no more than 8 virtual CPUs.

    System memory (in megabytes) to assign: Enter the amount of RAM (in megabytes) to assign to the Conferencing Node. The number entered must be a multiple of 4. We recommend 1024 MB (1 GB) of RAM for each virtual CPU.

  5. Select Next.
  6. You are now asked to provide the network configuration to be applied to the Conferencing Node, by completing the following fields:

    Name: Enter the name that will be used to refer to this Conferencing Node in the Pexip Infinity Administrator interface.

    Description: An optional field where you can provide more information about the Conferencing Node.

    IPv4 address: Enter the IP address to be assigned to this Conferencing Node when it is created.

    Network mask: Enter the IP network mask to be assigned to this Conferencing Node.

    Gateway IPv4 address: Enter the IP address of the default gateway to be assigned to this Conferencing Node.

    Hostname and Domain: Enter the hostname and domain to be assigned to this Conferencing Node. Each Conferencing Node and Management Node must have a unique hostname. The Hostname and Domain together make up the Conferencing Node's DNS name or FQDN. We recommend that you assign valid DNS names to all your Conferencing Nodes. For more information, see Assigning hostnames and FQDNs.

    System location: Select the physical location of this Conferencing Node. If the system location does not already exist, you can create a new one here by clicking the icon to the right of the field. This opens a new window showing the Add system location page. For further information, see About system locations.

    SIP TLS FQDN: A unique identity for this Conferencing Node, used in signaling SIP TLS Contact addresses. For more information, see SIP TLS FQDN.

    TLS certificate: The TLS certificate to use on this node. This must be a certificate that contains the above SIP TLS FQDN.

    IPv6 address: The IPv6 address for this Conferencing Node. Each Conferencing Node must have a unique IPv6 address.

    Gateway IPv6 address: The IPv6 address of the default gateway.

    IPv4 static NAT address: The public IPv4 address used by this Conferencing Node when it is located behind a NAT device. Note that if you are using NAT, you must also configure your NAT device to route the Conferencing Node's IPv4 static NAT address to its IPv4 address. For more information, see Configuring Pexip Infinity nodes to work behind a static NAT device.

    Static routes: From the list of Available Static routes, select the routes to assign to the node, and then use the right arrow to move the selected routes into the Chosen Static routes list. For more information, see Managing static routes.

    SSH password: Enter the password to be used when logging in to this Conferencing Node's Linux operating system over SSH. The username will always be admin. Logging in to the operating system is required only when changing passwords or for diagnostic purposes, and should generally be done under the guidance of your Pexip authorized support representative. In particular, do not change any configuration using SSH; all changes should be made using the Pexip Infinity Administrator interface.

  7. Select Finish.

    You are taken to the Manually Deploy Conferencing Node page.

  8. Select Download Conferencing Node.

    This downloads an OVA archive named pexip-<hostname>.<domain>.ova.

Convert the .ova image for use with Xen

To use the Conferencing Node OVA file with Xen, you must extract the VMDK disk image it contains and convert it to raw format:

  1. Copy the downloaded OVA file (named pexip-<hostname>.<domain>.ova) to the server running Xen.
  2. Unpack the .ova image, using the command:

    tar xf pexip-<hostname>.<domain>.ova

    This unpacks a set of files including pexip-disk01.vmdk.

  3. Convert the disk image from VMDK to raw, using the command:

    qemu-img convert -O raw pexip-disk01.vmdk pexip-disk01.raw

    (This conversion process can take several seconds.)
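
    Optionally, as a quick sanity check before uploading (this is not a required step in the procedure), you can inspect the converted image with qemu-img to confirm the conversion succeeded and to see its virtual size:

    qemu-img info pexip-disk01.raw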

Create a new volume and upload the disk image

Next, you create a new volume on your Xen server and upload the converted disk image. From within your Xen environment:

  1. Use virsh to create a new volume on your Xen server:

    virsh --connect xen://<hostname>/ vol-create-as <poolname> <volume_name> 49G --format raw

    where:

    <hostname> is the hostname of your Xen server. Note that you can omit <hostname> if you are running virsh commands on the local server, i.e. you can use virsh --connect xen:///.

    <poolname> is the name of the storage pool in which to create the volume; typically you would use default. (To determine the storage pools available on the target system, use virsh --connect xen://<hostname>/ pool-list.)

    <volume_name> is the name of your new volume.

    49G is the virtual size of the volume; always use 49G for a Conferencing Node.

    For example:

    virsh --connect xen://host1.example.com/ vol-create-as default pexip-conf-01 49G --format raw

    This example creates a volume named pexip-conf-01 of size 49 GB and format raw in the storage pool named default.

  2. Upload the converted disk image to the newly created volume:

    virsh --connect xen://<hostname>/ vol-upload <volume_name> pexip-disk01.raw --pool <poolname>

    For example:

    virsh --connect xen://host1.example.com/ vol-upload pexip-conf-01 pexip-disk01.raw --pool default

    This example uploads the pexip-disk01.raw image to the newly created volume, pexip-conf-01, in the storage pool named default.
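
    Optionally, you can confirm that the volume was created and the image uploaded by querying the volume with virsh; the reported capacity should be 49 GiB:

    virsh --connect xen://<hostname>/ vol-info <volume_name> --pool <poolname>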

Create the virtual machine

After the disk image has been uploaded, you can create the virtual machine to use it.

Note that we use the libvirt command line tools to perform the import as they provide greater control than Virtual Machine Manager.

  1. Identify the filesystem path of the newly uploaded disk image:

    virsh --connect xen://<hostname>/ vol-path <volume_name> --pool <poolname>

    For example:

    virsh --connect xen://host1.example.com/ vol-path pexip-conf-01 --pool default

    This prints out the absolute path to the disk image file, for example:

    /var/lib/libvirt/images/pexip-conf-01

    This path is used in the disk path parameter in the next step.

  2. Use the virt-install command line tool to create the virtual machine:

    virt-install \
      --import \
      --hvm \
      --name=<vm_name> \
      --arch=x86_64 \
      --vcpus=4 \
      --ram=4096 \
      --os-type=linux \
      --connect=xen://<hostname>/ \
      --virt-type xen \
      --disk path=<image_file_path>,bus=xen,format=raw,driver_name=qemu,cache=none,io=native \
      --network bridge=xenbr0,model=e1000 \
      --memballoon xen \
      --graphics vnc,listen=0.0.0.0,password=<password>

    This creates a new VM (Xen domain) from the converted disk image.
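
    For example, using the illustrative hostname, disk path and VM name from the previous steps (the VNC password shown here is a placeholder; substitute your own values throughout):

    virt-install \
      --import \
      --hvm \
      --name=pexip-conf01-vm \
      --arch=x86_64 \
      --vcpus=4 \
      --ram=4096 \
      --os-type=linux \
      --connect=xen://host1.example.com/ \
      --virt-type xen \
      --disk path=/var/lib/libvirt/images/pexip-conf-01,bus=xen,format=raw,driver_name=qemu,cache=none,io=native \
      --network bridge=xenbr0,model=e1000 \
      --memballoon xen \
      --graphics vnc,listen=0.0.0.0,password=mysecret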

    The meaning of each option is as follows (placeholder values in angle brackets, and settings such as the CPU count, memory and bridge name, may be changed as necessary):

    --import: Build the guest domain around a pre-installed disk image; do not attempt to install a new OS.

    --hvm: Create a fully virtualized (i.e. not paravirtualized) VM.

    --name=<vm_name>: Name of the new VM, where <vm_name> is, for example, pexip-conf01-vm.

    --arch=x86_64: CPU architecture of the new VM (must be x86_64).

    --vcpus=4: Number of CPUs allocated to the new VM. By default, this is 4 for a Conferencing Node.

    --ram=4096: Memory allocated to the new VM (in megabytes).

    --os-type=linux: The guest OS is Linux.

    --connect=xen://<hostname>/: Connect to Xen on the target system, where <hostname> is the hostname of your Xen server.

    --virt-type xen: Use Xen to host the new VM.

    --disk path=<image_file_path>,bus=xen,format=raw,driver_name=qemu,cache=none,io=native:
    • Define the location of the disk image file, where <image_file_path> is as determined in the previous step, for example /var/lib/libvirt/images/pexip-conf-01.
    • Expose the disk to the guest on the xen paravirtualized bus (as opposed to IDE/SCSI).
    • Define the image file as being in raw format and the driver as qemu.
    • Instruct the host system not to cache the disk contents in memory.
    • Use the native IO backend to access the disk device.

    --network bridge=xenbr0,model=e1000:
    • Create a network interface connected to the xenbr0 bridge interface on the host.
    • Expose it to the guest as an e1000 NIC.

    --memballoon xen: Expose the xen memory balloon to the guest.

    --graphics vnc,listen=0.0.0.0,password=<password>: Expose the graphical console over VNC, listening on 0.0.0.0 (i.e. all addresses on the target system), with an access password of <password>.

    You may receive a warning "Unable to connect to graphical console: virt-viewer not installed"; if so, this message can be safely ignored.

    After the VM has been created, it may be managed using the Virtual Machine Manager desktop interface (virt-manager application) or via the command line interface (virsh).

    The new node should start automatically. If it does not, you can start it using Virtual Machine Manager, or with the CLI command:

    virsh --connect xen://<hostname>/ start <vm_name>

    Note that you can list existing VMs by using virsh --connect xen://<hostname>/ list.

After deploying a new Conferencing Node, it takes approximately 5 minutes before the node is available for conference hosting and for its status to be updated on the Management Node. (Until it is available, the Management Node will report the status of the Conferencing Node as having a last contacted and last updated date of "Never".)

Enabling automatic startup

After deploying a new Conferencing Node in Xen, you should enable automatic startup of that virtual machine (VM). In Xen, automatic startup is disabled by default for every new VM. This means that if the host server is powered down for any reason, the VM will not restart automatically when the host comes back up and must be started manually.

You can only enable automatic startup after the Conferencing Node has been deployed.

To enable automatic startup using Virtual Machine Manager:

  1. Connect to the Virtual Machine Manager (virt-manager) that is managing the node's VM.
  2. Select the node's VM and then, from the toolbar, select the Show the virtual machine console and details icon.

    A new window for that VM is opened.

  3. If necessary, select View > Details to display the VM information.
  4. From the sidebar menu, select Boot Options.
  5. Select the Start virtual machine on host boot up check box.
  6. Select Apply.
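
Alternatively, you can enable automatic startup from the command line using libvirt; this is equivalent to the Virtual Machine Manager setting above (<hostname> and <vm_name> as in the earlier examples):

    virsh --connect xen://<hostname>/ autostart <vm_name>

You can verify the setting with virsh --connect xen://<hostname>/ dominfo <vm_name>, which reports the Autostart state.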

Next steps

Enabling SNMP on Conferencing Nodes