From Mageia wiki

Intro

This page explains how to install and configure Xen, aimed at readers who already know their way around virtualisation.
Wikipedia defines Xen as "a hypervisor providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently."

dom0

Install Mageia

You can use the dual-arch CD to install a minimal setup; just make sure you choose a custom install and deselect everything. You get the best results with 64-bit.

Install Xen

  • make sure to install the server kernel
  • install the xen package
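On Mageia both steps are a single urpmi invocation; the package names below are assumptions based on how Mageia usually names them, so adjust if your release differs:

```shell
# install the server kernel and the Xen hypervisor/tools (run as root)
urpmi kernel-server-latest xen
```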

configure Xen to boot

  • in your grub menu.lst, copy an entry and prepend a line with: kernel (hd0,0)/xen.gz
  • modify the real kernel line to be something like module (hd0,0)/vmlinuz ...
  • modify the initrd line to be something like module (hd0,0)/initrd.img
  • boot into that entry
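Putting the three bullets together, a complete menu.lst entry might look like the following; the (hd0,0) device, the root= argument and the kernel options are placeholders, so take them from your existing Mageia entry:

```
title Mageia with Xen
kernel (hd0,0)/xen.gz
module (hd0,0)/vmlinuz root=/dev/sda1 ro splash=silent
module (hd0,0)/initrd.img
```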

xm(xend) vs xl

  • this discussion is obsolete: Xen 4.5 removed xend (and with it the xm toolset)
  • after installing you should have both the xm and xl commands
  • xl list should show only the dom0
  • xm is the old-style tool and requires xend to run; xl is replacing xm and does not need xend
  • by default xend does not start at boot time; you will need to enable that if you want to use the xm toolset
  • xend will set up your networking, but the xl toolset, since it doesn't need xend, does not
  • if you choose xl, it is advisable to set up a bridge interface in /etc/sysconfig/network-scripts/
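A minimal bridge definition could look like the following; the physical NIC name (eth0) and DHCP addressing are assumptions, so adjust them to your setup:

```
# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
```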

libvirtd support

  • libvirtd also supports Xen (both via the old xend and the new libxl)
  • install libvirtd
  • I would advise you to define a libvirt network (stored under /etc/libvirt/) that uses the bridge you made with network-scripts (br0); the network name is up to you, the key elements are:

<network>
  <name>host-bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

  • for the new style, you can connect to it with xen+ssh://ip/
  • libvirtd only does PV if you choose "import disk"
  • libvirtd does HVM if you choose boot from CDROM or network
  • you can use virt-manager to connect to it as well
  • it appears libvirtd relies on pygrub and thus needs a kernel on the image
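For example, listing the domains on the dom0 from a remote machine over SSH could look like this (the hostname is a placeholder):

```shell
# connect to libvirt's Xen driver on the dom0 via SSH and list all domains
virsh -c xen+ssh://root@dom0.example.com/ list --all
```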

domU

PV

PV is the most efficient mode and is mostly used for Linux guests. You can make an image file, loopback-mount it, chroot-install into it, and then boot from it. You'll also need a kernel/initrd to boot with; you can copy the kernel from your host, but build the initrd with dracut, and make it a generic one. There is also pygrub, which seems to be able to boot the kernel straight from your image, but I never tried that.
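A minimal xl config for such a PV guest might look like the following; all paths and names here are hypothetical, with kernel and ramdisk being the ones you copied or built on the host:

```
name = "pv-guest"
kernel = "/var/lib/xen/boot/vmlinuz"
ramdisk = "/var/lib/xen/boot/initrd.img"
extra = "root=/dev/xvda ro"
memory = 1024
vcpus = 1
disk = ['file:/var/lib/xen/images/pv-guest.img,xvda,w']
vif = ['bridge=br0']
```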

HVM

HVM is full virtualisation and can be used for things like Windows and whatnot. You can install from CDROM or network for this.

However, these days HVM is not bad for Linux either, since PV drivers can be used inside HVM. In other words, a perfect way to test ISO installs...

on top of that, spice is also added...

spice is like vnc, only better:

  • guests can have spice-vdagent, so that mouse can go in and out of guest window without explicit grab (release with SHIFT+F12)
  • clipboard-crossover (copy paste outside of guest)
  • QXL (a more efficient display) is not supported yet
  • spice clients should support usb-redirection (untested)
  • spice client can play the guest audio!

example config:

name="vm-name"
builder="hvm"
boot="cdn"
memory=1024
vcpus=1
vif=['vifname=vif-foo.0,bridge=br-wan,mac=E2:03:BE:F2:59:A2']
disk=['file:/var/lib/libvirt/images/testpxe.img,hda,w',',raw,hdb,ro,cdrom']
keymap="nl-be"
soundhw="hda"
vga="stdvga"
vnc=0
serial='pty'
spice=1
spicelisten="0.0.0.0"
spiceport=6000
spicevdagent=1
spiceagent_mouse=1
spice_clipboard_sharing=1
spiceusbredirection=4
spicedisable_ticketing=1

  • boot="cdn" # first disk, then CD, then network
  • disk=['file:/data/image.img,hda,w'] # prepare disk image with 'dd if=/dev/zero of=/data/image.img bs=1M count=1 seek=16k' for a 16GB sparse disk image
  • disk=[',raw,hdb,ro,cdrom'] # empty cdrom, for inserting a removable .iso with the "xl cd-insert" command
  • vif=['vifname=vif-foo.0,bridge=br-wan,mac=E2:03:BE:F2:59:A2'] # for a network interface which will be connected to the pre-existing br-wan bridge interface
  • soundhw="hda" # with this, intelhda device will produce sound if you connect with spice to your spice-client (cool huh!)
  • vga="stdvga" # qxl is not supported yet
  • serial='pty' # this is to have "xl console" work on HVM, but I wasn't successful yet
  • spice* # for getting spice to work
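Assuming the example config above is saved as /etc/xen/vm-name.cfg, starting the guest and attaching a spice client could look like this; the hostname is a placeholder, and remote-viewer (shipped with virt-viewer) is just one spice-capable client:

```shell
xl create /etc/xen/vm-name.cfg        # start the HVM guest
remote-viewer spice://dom0-host:6000  # attach to its spice display
```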

using upstream qemu-xen

I packaged upstream qemu-xen into /usr/bin. That means that if you want to use it, you'll need to override the path and model settings:

device_model_version="qemu-xen"
device_model_override="/usr/bin/qemu-xen"

trying VGA passthrough

if you want to try this, see http://wiki.xen.org/wiki/XenVGAPassthroughTestedAdapters for which graphics cards might work.

known issues

  • at the moment libvirtd forces blktap usage, which we don't support (FIXED: failover to unknown type)
  • libvirtd has a problem where "udevadm settle" takes too long and libvirtd cannot be connected to anymore (FIXED)
  • make sure to enable virtualization in the BIOS!!!
  • libvirtd needs pygrub and thus a kernel in the image for paravirtualisation. WORKAROUND: select "configure before start" and type in the initrd and kernel paths from the host (don't browse, that doesn't work)
  • on failure to create an HVM guest, libvirtd seems to segfault (it also sometimes happens on domain shutdown); you have to restart the libvirtd service. WORKAROUND: NONE
  • libvirtd needs xenbr0 in the default network (even if it's defined differently). WORKAROUND: select a bridge yourself and input br0 (the name of the bridge)
  • virt-manager detects that this network doesn't support PXE (even though it does) and thus disables the whole network when trying to netboot. WORKAROUND: select "configure before start" and change the network model to e1000.
  • the xen-netfront module is required to start a PXE installation (HVM), and there was a bug where modules with '-' in their name failed to load. WORKAROUND: make a debug build, install those modules in your PXE environment, then go to the console and run "modprobe xen-netfront" (insmod doesn't work) (FIXED)