Installing the Xen hypervisor and Kernel

Installing the Xen package

The Xen packages are currently keyword masked (~arch) on both the x86 and x86_64 platforms, so before they can be installed they need to be added to the keyword list. This is easily done with the following commands.

lisa echo "app-emulation/xen" >> /etc/portage/package.keywords
lisa echo "app-emulation/xen-tools" >> /etc/portage/package.keywords

Before we install the Xen hypervisor and associated management tools it is a good idea to check that the correct use-flags have been selected. A pretend merge, as shown in the example below, will show the available and selected use-flags so that we can be sure they are correct.

lisa ~ # emerge -pv xen xen-tools
 
These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild      ] net-misc/bridge-utils-1.4
[ebuild      ] dev-python/pyxml-0.8.4-r2  USE="-doc -examples"
[ebuild      ] sys-apps/iproute2-2.6.31  USE="-atm -berkdb -minimal"
[ebuild      ] app-emulation/xen-tools-4.0.1  USE="-acm -api -custom-cflags -debug -doc -flask -hvm -ioemu -pygrub -screen"
[ebuild      ] app-emulation/xen-4.0.1  USE="pae -acm -custom-cflags -debug -flask -xsm"
Warning:
If you intend to run a 32-bit kernel, as either the host or a guest, it is extremely important that the app-emulation/xen package is built with the pae use-flag set, on both 32-bit and 64-bit systems, as recent 32-bit kernels will not boot under a hypervisor built without PAE support.
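If the pretend merge shows that pae is not enabled, one simple way to switch it on for just this package (shown here as a suggestion; a global USE change works equally well) is to add an entry to /etc/portage/package.use before emerging:

lisa ~ # echo "app-emulation/xen pae" >> /etc/portage/package.use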
 

Once we are satisfied that the correct use-flags are enabled we can install the Xen hypervisor and associated management tools using the emerge command below.

lisa ~ # emerge xen xen-tools

Installing a Xen compatible kernel

Assuming that the Xen virtual machine monitor and management tools built and installed correctly, we can continue the installation process by installing a Xen compatible kernel. As of the date of writing, domain 0 support has yet to be merged into the mainline kernel, so a specially patched version of the kernel is required. The sources for this kernel are also keyword masked for all compatible architectures and will need to be added to the keyword list before installing, as shown below.

lisa echo "sys-kernel/xen-sources" >> /etc/portage/package.keywords
lisa emerge xen-sources

Building a Kernel for Domain 0

Now that we have installed the kernel sources for a Xen domain 0 compatible kernel we can import our existing kernel configuration to simplify the task of configuring the new kernel.

lisa ~ # cd /usr/src
lisa src # cp linux/.config linux-2.6.34-xen-r4/
lisa src # ln -sf linux-2.6.34-xen-r4 linux
lisa src # cd linux
lisa linux # make oldconfig
lisa linux # make menuconfig
Information:
As the current Xen-patched version of the Linux kernel is based on version 2.6.34, a number of configuration options may be unavailable. If you have a working kernel configuration from a kernel version closer to 2.6.34 than the one currently pointed to by the linux symbolic link, use that instead. As discussed in the introduction, some hardware may be unsupported by such an old kernel version. Until domain 0 support is merged into the mainline kernel, or a new Xen-patched kernel based on a newer Linux version is released, hardware requiring newer kernels is unusable by domain 0.
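If no suitable saved configuration is available, and the currently running kernel was built with the CONFIG_IKCONFIG_PROC option, its configuration can instead be extracted from the running system before running make oldconfig (this only works when that option is enabled):

lisa linux # zcat /proc/config.gz > .config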
 

With the kernel sources installed and our existing, presumably functioning, kernel configuration imported, we can start to reconfigure the kernel for use with the Xen virtual machine monitor. The example screen below shows the first critical configuration change which must be made to enable use of the kernel with Xen. Enabling this option also exposes the other Xen related options which we shall need to configure later.

Processor type and features  --->
  [*] Enable Xen compatible kernel                      (CONFIG_X86_64_XEN)

Once we have enabled Xen support we can configure the various Xen options using the XEN sub-menu, which can be found under Device Drivers on the main kernel configuration page, as shown in the example below. As you can see, we have enabled the Privileged Guest (domain 0) option and several of the back-end driver support options. As this is a domain 0 kernel, none of the front-end driver options are required. Feel free to enable more of the back-end driver options if you wish to experiment with their usage; only the options whose use is explored in this guide are enabled in the example below.

XEN  --->
  [*] Privileged Guest (domain 0)                       (CONFIG_XEN_PRIVILEGED_GUEST)
  [*] Backend driver support                            (CONFIG_XEN_BACKEND)
  [*]   Block-device backend driver                     (CONFIG_XEN_BLKDEV_BACKEND)
  [ ]   Block-device tap backend driver                 (CONFIG_XEN_BLKDEV_TAP)
  [ ]   Block-device tap backend driver 2               (CONFIG_XEN_BLKDEV_TAP2)
  [*]   Network-device backend driver                   (CONFIG_XEN_NETDEV_BACKEND)
  [ ]     Pipelined transmitter (DANGEROUS)             (CONFIG_XEN_NETDEV_PIPELINED_TRANSMITTER)
  [ ]     Network-device loopback driver                (CONFIG_XEN_NETDEV_LOOPBACK)
  [ ]   PCI-device backend driver                       (CONFIG_XEN_PCIDEV_BACKEND)
  [ ]   TPM-device backend driver                       (CONFIG_XEN_TPMDEV_BACKEND)
  [ ]   SCSI backend driver                             (CONFIG_XEN_SCSI_BACKEND)
  [ ]   USB backend driver                              (CONFIG_XEN_USB_BACKEND)
  [ ] Block-device frontend driver                      (CONFIG_XEN_BLKDEV_FRONTEND)
  [ ] Network-device frontend driver                    (CONFIG_XEN_NETDEV_FRONTEND)
  [ ] SCSI frontend driver                              (CONFIG_XEN_SCSI_FRONTEND)
  [ ] USB frontend driver                               (CONFIG_XEN_USB_FRONTEND)
  [ ] User-space granted page access driver             (CONFIG_XEN_GRANT_DEV)
  [*] Disable serial port drivers                       (CONFIG_XEN_DISABLE_SERIAL)
  [*] Export Xen attributes in sysfs                    (CONFIG_XEN_SYSFS)
      Xen version compatibility (4.0.0 and later)  --->
  [*] Place shared vCPU info in per-CPU storage         (CONFIG_XEN_VCPU_INFO_PLACEMENT)

There are also two options of interest in the Xen driver support menu, as shown below.

Xen driver support  --->
  [*] Scrub memory before freeing it to Xen             (CONFIG_XEN_SCRUB_PAGES)
  [*] Xen /dev/xen/evtchn device                        (CONFIG_XEN_DEV_EVTCHN)
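Before building, it can be worth confirming that the critical options actually ended up in the generated .config; a quick, purely illustrative check is to grep for a few of the symbols we enabled, which should produce output similar to the following:

lisa linux # grep -E "CONFIG_XEN_(PRIVILEGED_GUEST|BACKEND|BLKDEV_BACKEND|NETDEV_BACKEND)=" .config
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_BACKEND=y
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y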

Now that the kernel is correctly configured for our target system and for Xen, we can build the kernel image, install the required modules and copy the kernel image to the boot partition ready for use. The example commands below show how to perform this task for a 64-bit x86 system. If you are using a 32-bit machine then the last line will need to be modified accordingly.

lisa linux make && make modules_install
lisa linux mount /boot
lisa linux cp arch/x86_64/boot/bzImage /boot/kernel-2.6.34-xen-r4
Caution:
Versions of Xen below 3.4.0 are only able to boot uncompressed kernel images, and therefore the uncompressed vmlinux file from the top of the kernel build tree must be copied to the boot partition instead of the more common bzImage file. Later versions of Xen are able to boot the standard compressed Linux kernel images, so the usual bzImage may be used.
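It can also be handy, though entirely optional, to keep a copy of the configuration and symbol map that match this image in the boot partition, which makes later debugging or rebuilding easier:

lisa linux # cp .config /boot/config-2.6.34-xen-r4
lisa linux # cp System.map /boot/System.map-2.6.34-xen-r4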
 

Reconfiguring GRUB

With a suitable kernel image for domain 0 successfully built and copied to the boot partition we must update the boot-loader configuration to offer the Xen hypervisor, with this kernel as the target for domain 0, as the default option. We shall leave our original kernel and the associated lines in our boot-loader configuration intact in case Xen fails to boot correctly for some reason.

lisa ~ # nano -w /boot/grub/grub.conf

The example grub.conf below shows how the Xen hypervisor is loaded in place of the usual kernel by the kernel command, with the actual Linux kernel we are using for domain 0 loaded by the module command on the following line. As you can see, options can be passed to the Xen hypervisor on the kernel line (in this case we specify that the sEDF scheduler be used), while the normal kernel options are passed on the module line. Both the kernel and module lines require the boot device to be specified using standard GRUB notation, and this, as well as the root device to boot from, may need changing according to the specific configuration of the target machine.

/boot/grub/grub.conf
default 0
timeout 30

# XEN 4.0.1
title XEN 4.0.1 (sEDF) / Linux 2.6.34-xen-r4 (hd0,0)
kernel (hd0,0)/xen-4.0.1.gz sched=sedf
module (hd0,0)/kernel-2.6.34-xen-r4 root=/dev/md2

# Fall-back
title Gentoo Linux 2.6.38-r6
root (hd0,0)
kernel /boot/kernel-2.6.38-gentoo-r6 root=/dev/md2
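Additional hypervisor options can be appended to the same kernel line if needed. For example, many administrators give domain 0 a fixed memory reservation with the dom0_mem parameter; the value below is purely illustrative and should be sized to suit the machine:

kernel (hd0,0)/xen-4.0.1.gz sched=sedf dom0_mem=1024M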
Information:
It is a sensible precaution (as shown above) to leave a non-Xen kernel image and the associated boot-loader configuration in place, as upgrades to the Xen hypervisor can result in a non-functional system which is difficult to recover unless the machine can be booted without Xen so that a correctly functioning version of the hypervisor can be restored.

Configuring OpenRC

Since the stabilisation of sys-apps/baselayout-2.0 and the migration to sys-apps/openrc, the boot scripts no longer attempt to auto-detect the runtime environment by default. So that the OpenRC scripts know we are running as a Xen domain 0, we should supply this information by modifying the rc_sys entry in the /etc/rc.conf file as shown in the example below.

/etc/rc.conf
rc_sys=""
rc_sys="xen0"

Rebooting into Xen

Now that the Xen hypervisor and domain 0 kernel have been installed and the boot-loader and openrc appropriately reconfigured we can unmount the boot partition and reboot the machine as shown below.

lisa ~ # umount /boot
lisa ~ # shutdown -r now && exit

If everything went well you should see the Xen hypervisor initialise and then pass control to the domain 0 kernel which should boot in the normal way. If the reboot was done remotely and you are in doubt as to the identity of the kernel which is currently running you can check that it is indeed the Xen compatible kernel we built earlier with the command shown below.

lisa ~ # uname -sr
Linux 2.6.34-xen-r4
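To further confirm that this kernel really is running as domain 0 under the hypervisor, and assuming the xenfs filesystem is mounted at /proc/xen (as it normally is on a domain 0 kernel), the capabilities file should report control_d:

lisa ~ # cat /proc/xen/capabilities
control_d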