As we saw in the previous section, the current state of Xen development is such that only a specially patched kernel can be used for domain 0. Support for guest kernels running as domain U, on the other hand, has progressed much faster, and any kernel since version 2.6.23 can run as a Xen paravirtualised guest when correctly configured.
Building Xen guest support on recent kernels is trivial, as all the functionality required for running as domain U has already been merged. All that is required is to select the correct options from the configuration menus as shown below. As usual with the kernel configuration menus, options are sometimes moved between versions, so some options may be located on other pages. Also, because not all kernel options are significant and different use cases will require different configurations, we have not listed every option from each page, just those whose settings should match the ones shown.
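As a rough sketch of the options this amounts to, expressed in .config notation (symbol names as they appear in 2.6.3x-era kernels; exact names and menu locations vary between versions):

```
CONFIG_PARAVIRT_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_XEN=y
```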
Unless you are planning on connecting physical hardware devices to guest domains without using any form of virtualisation, such as a telephony device for use by a guest operating system, the guest kernel does not require support for PCI-E, PCI-X, PCI, ISA, or any other bus technology. Obviously, if you are intending to connect hardware devices to guest domains an appropriate bus technology should be enabled.
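In .config terms the bus support can simply be left unset; an illustrative fragment (CONFIG_ISA only appears on 32-bit kernels):

```
# CONFIG_PCI is not set
# CONFIG_ISA is not set
```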
As the hardware presented to guest domains is virtualised by the host domain, the Device Drivers section for a guest domain can be mostly left disabled. There are some exceptions to this rule, however, as the virtualised front-end drivers for block and network devices will be required. As you can see from the example configuration below, a guest domain does not even need support for disk drivers, as the disks will be presented as virtualised block devices.
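Disabling the disk driver stacks means leaving the usual ATA, IDE and SCSI options unset; a sketch in .config notation (illustrative symbol names, not the authors' exact configuration):

```
# CONFIG_IDE is not set
# CONFIG_ATA is not set
# CONFIG_SCSI is not set
```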
Unless you are intending to connect a physical disk device to a guest domain there is no need to select any block device drivers other than the Xen front-end device, as shown below.
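The Xen block front-end is the only block driver the guest needs; in .config notation (2.6.3x-era symbol names):

```
CONFIG_BLK_DEV=y
CONFIG_XEN_BLKDEV_FRONTEND=y
```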
As with block device support above, unless you are intending to connect a real physical network device to a guest domain, perhaps to isolate a guest on a particular network segment or as part of a firewall installation, support for network devices other than the Xen front-end driver is not required. Obviously, if you are intending to connect a physical network device to a guest domain, the appropriate driver options should be selected.
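The corresponding .config fragment for the network front-end (illustrative symbol names from 2.6.3x-era kernels):

```
CONFIG_NETDEVICES=y
CONFIG_XEN_NETDEV_FRONTEND=y
```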
As a paravirtualised guest domain will not have access to a physical terminal or framebuffer device, support for the Xen virtual console should be enabled as shown in the example below.
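The Xen console is provided through the hypervisor console (hvc) driver; in .config notation (2.6.3x-era symbol names):

```
CONFIG_HVC_DRIVER=y
CONFIG_HVC_XEN=y
```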
The Xen hypervisor will handle setting the time on a guest domain directly so the Real Time Clock options can all be disabled, as shown below.
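In .config terms this is simply:

```
# CONFIG_RTC_CLASS is not set
```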
The final section of interest contains the Xen specific drivers. As the description provided for these items by the kernel configuration application is fairly extensive, and as we shall be describing some of these options in more detail in later sections, we shall not duplicate this material here. The options enabled in the example below should be suitable for almost all Xen guest installations.
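A sketch of a typical selection from this section, in .config notation (illustrative 2.6.3x-era symbol names; consult the in-kernel help text for the descriptions mentioned above):

```
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XENFS=y
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
```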
When you have finished configuring the kernel sources which will be used for the guest operating system kernel(s), they can be built as shown below. As you can see, we have not installed the kernel modules on the host machine: they will only be used by the guest domains and would waste space, or possibly even conflict with modules already installed, on the host. We shall still have to make the modules available to any guest domains which use them later on, however, as they are required to be installed there.
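The build itself is a plain kernel compile; something along these lines, run from the configured source tree (adjust the -j value to your CPU count):

```shell
cd /usr/src/linux
# builds both the kernel image and any modules selected in the configuration
make -j2
```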
Once the kernel has finished compiling it can be copied to a new directory, which we shall create to store kernel images for our guest domains, as shown below. As before, we have provided instructions for installing a 64-bit kernel, so if you are using a 32-bit operating system the source path will need to be modified. Obviously, if you are using a different kernel version to that shown here, the destination path should be changed accordingly.
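A sketch of the copy, assuming a recent kernel tree where the image is built as arch/x86/boot/bzImage (older trees used arch/x86_64 or arch/i386 paths, and some older Xen versions require the uncompressed vmlinux instead):

```shell
mkdir -p /usr/xen/kernels
cp arch/x86/boot/bzImage /usr/xen/kernels/linux-2.6.38-gentoo-r6
```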
With the kernel for our guest operating system built, we can move on to allocating storage for our guest domain. As this is just an example which we shall be using to demonstrate the various features of the Xen virtualisation environment, we shall only allocate a very small logical volume. In a real installation more disk space may be required although, as we shall demonstrate, a virtual machine can be connected to multiple logical volumes, so such a small volume may be all that is required.
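The volume producing the output below can be created with lvcreate; a sketch, assuming the volume group is named volumes (matching the phy:volumes/someguest-vm path used in the domain configuration later) and a 1 GB size, which is consistent with the block count in the mke2fs output further down:

```shell
lvcreate -L 1G -n someguest-vm volumes
```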
Logical volume "someguest-vm" created
As you can see, we have added the suffix vm to the volume name so that we can easily identify this as the root volume of a virtual machine. When we create additional volumes for the guest domain in later sections we shall use the suffix vx followed by a description of the volume's purpose to form a complete volume name, such as someguest-vx-database, so that volumes can be easily identified on the host machine.
Once we have allocated some storage for our guest domain we need to format it with a suitable filesystem. In the example below we have used ext3 with extra inodes, as the default is too few for such a small volume. Feel free to use any filesystem supported by the guest kernel, such as ext4 or ReiserFS, if you wish; however, if you do use a different filesystem you may need to modify the following examples accordingly.
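The output below would be produced by a command along these lines (again assuming the volume group is named volumes):

```shell
# -j adds an ext3 journal; -i 8192 allocates one inode per 8 KiB,
# doubling the default inode count -- hence the 131072 inodes shown below
mke2fs -j -i 8192 /dev/volumes/someguest-vm
```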
mke2fs 1.41.3 (12-Oct-2008)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Now that we have somewhere to install our guest domain we can commence with the installation, essentially as one would when performing a normal install according to the Gentoo Handbook. The only exception is that we shall share the portage snapshot and kernel sources from the host installation.
We shall begin the installation by creating a directory to mount the volume we created above, downloading the current stage 3 archive and unpacking that archive using the tar application as shown in the example below.
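A sketch of these steps, assuming the volume group name used earlier; the stage 3 download location is deliberately left to you, as mirrors and current archive names change:

```shell
mkdir -p /mnt/gentoo
mount /dev/volumes/someguest-vm /mnt/gentoo
cd /mnt/gentoo
# fetch the current stage 3 archive from a Gentoo mirror of your choice,
# then unpack it preserving permissions (the p flag is important)
tar xjpf stage3-amd64-*.tar.bz2
```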
Now that we have a default installation present in /mnt/gentoo we can copy some of the configuration files from our host system to save us from recreating them. As you can see we have copied the /etc/make.conf file and the /etc/portage directory so that portage will be configured the same on the guest and the host. We have also copied /etc/resolv.conf to the guest so that name resolution will function correctly from the guest system.
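The copies described above amount to:

```shell
cp /etc/make.conf /mnt/gentoo/etc/
cp -R /etc/portage /mnt/gentoo/etc/
cp /etc/resolv.conf /mnt/gentoo/etc/
```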
The next step is to bind mount some directories from the host to the guest so that we can install the net-fs/nfs-utils package on the guest system. We shall need this later as, to avoid duplicating the portage tree and kernel sources, we shall be sharing them from the host system.
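One plausible set of mounts, assuming the portage tree lives at /usr/portage and the kernel sources under /usr/src on the host:

```shell
mount -t proc none /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
# share the portage tree and kernel sources from the host for now
mkdir -p /mnt/gentoo/usr/portage
mount -o bind /usr/portage /mnt/gentoo/usr/portage
mount -o bind /usr/src /mnt/gentoo/usr/src
```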
With the portage tree and kernel sources available to the guest domain we can now chroot into the new installation so that we may complete the installation tasks. Once inside the new installation you should set a new root password so that we will be able to access the system when running as a guest domain.
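Entering the chroot follows the usual Gentoo pattern; a sketch:

```shell
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile
```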
>>> Regenerating /etc/ld.so.cache...
lisa / # passwd
New UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Now that we are inside the new installation we can install the net-fs/nfs-utils package. Once the package has been installed we can add the nfsmount to the default run-level so that the portage tree and kernel sources exported by the host can be used on this guest. The sshd daemon should also be added to the default run-level so that we can use ssh to connect to our guest domain.
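These steps, run inside the chroot, amount to:

```shell
emerge net-fs/nfs-utils
rc-update add nfsmount default
rc-update add sshd default
```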
If your guest kernel will be using any kernel modules then we should install them now so that they are available when we boot our guest domain later. Once the kernel modules are installed we can update the dependency information using the update-modules application as shown in the example.
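A sketch, assuming the host kernel sources are available inside the chroot at /usr/src/linux as mounted earlier:

```shell
cd /usr/src/linux
make modules_install
update-modules
```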
A default Linux installation will use the /dev/console device for any console output during the boot process. Usually /dev/console maps to /dev/tty0; however, on a Xen guest domain /dev/tty0 is not connected to anything, so the console maps to the /dev/hvc0 device instead. Unfortunately the default inittab file provided with Gentoo Linux does not start an agetty process on this device, so the login prompt will not appear when the guest domain is started. If we wish to log in to our guest domain using the Xen console we will need to add a suitable entry to inittab to start an agetty process on /dev/console, as shown below.
c5:2345:respawn:/sbin/agetty 38400 tty5 linux
c6:2345:respawn:/sbin/agetty 38400 tty6 linux

# SERIAL CONSOLES
x1:12345:respawn:/sbin/agetty 38400 console linux
The next step is to configure the network settings for the guest domain. As we shall be using the route networking scripts for our guests in these examples it is important that we select the correct address. Assuming that your network uses the 192.168.0.0/16 address block and does not already use the 192.168.1.0/24 network addresses for anything then they should be usable, as in our example below. If this network block is already in use on your network, or your network currently uses addresses not in the 192.168.0.0/16 range, then you will need to modify the following network examples accordingly.
config_eth0=( "192.168.1.1/24" )
routes_eth0=( "192.168.0.0/24" "default via 192.168.0.1" )
Finally we can configure the disk settings for our guest domain. As you can see from the example below instead of the more common /dev/sd* or /dev/hd* devices for harddisks used on normal installations the Xen virtualisation environment presents disks as /dev/xvd* device nodes. Other than this change of name partitions are still numbered as usual so the first "partition" on a virtual device, which is probably not a partition at all but a logical volume on the host, would be presented to the guest as /dev/xvda1 and is therefore used as the device for the root filesystem in our example below.
# <fs> <mount point> <type> <opts> <dump/pass>
/dev/xvda1 / ext3 noatime 0 1
The remaining configuration changes should be familiar to any regular user of Gentoo Linux, so they will not be documented here; some reminders are given below, however, for your convenience.
The above configuration should be sufficient to allow our guest domain to start correctly and then allow us to connect to that guest using the ssh client application. Once you have made any additional configuration changes you wish it is time to exit the chroot as shown below.
We can then unmount the directories we mounted inside the installation earlier, followed by the volume we are using for our guest domain, as the guest will require exclusive access to it if it is to operate correctly.
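A sketch of the teardown, assuming /proc, /dev, the portage tree and the kernel sources were the directories mounted inside the chroot:

```shell
exit                # leave the chroot first
umount /mnt/gentoo/usr/src /mnt/gentoo/usr/portage
umount /mnt/gentoo/dev /mnt/gentoo/proc
umount /mnt/gentoo
```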
Now that our guest domain is installed and configured we can configure the Xen virtualisation system for that guest domain. The example file below shows all the basic configuration options required to run the guest system we installed above.
# General
name = "someguest";
memory = 128;
# Booting
kernel = "/usr/xen/kernels/linux-2.6.38-gentoo-r6";
root = "/dev/xvda1 rw";
extra = "vdso32=0";
# Virtual harddisk
disk = [ "phy:volumes/someguest-vm,xvda1,w" ];
# Virtual network
vif = [ "ip=192.168.1.1, vifname=vif.someguest" ];
As you can see the configuration file above has been divided into sections to enhance readability. The first section contains general configuration options such as the name of the guest domain and the amount of memory to allocate. The second section contains information relevant to the boot process such as the path to the kernel which the guest domain will be running as well as the device that the guest system should use for the root filesystem. Extra parameters to be supplied to the guest kernel are also specified here using the extra variable.
The third section is more complex and describes the virtual disk presentation for the guest domain. In the example above we are presenting a single LVM volume to the guest, specified as phy:volumes/someguest-vm, and we have indicated that this volume should be presented to the guest as the virtual disk device /dev/xvda1 and that it will be writable. We could also have used the file: directive to mount a file over a loopback device; however, the performance of this method is considerably lower than the phy: method, so it should only be used when absolutely necessary. Multiple entries can be specified, allowing more than one volume to be presented to a guest domain as either another virtual device, /dev/xvdb1 for example, or another partition on an existing virtual device such as /dev/xvda2.
The fourth section contains configuration settings relating to the virtual network interfaces presented to the guest domain. In the example above we provide a single network interface with the IP address 192.168.1.1 and specify that it will be named vif.someguest on the host system. Any unique name can be chosen here; however, it is customary to name the interface after the guest domain to which it belongs, possibly with a descriptive suffix if multiple interfaces are specified.