From Mageia wiki


Virtualisation in Mageia

By virtualisation we here mean, more or less, emulating one or more computers on which you can install other operating systems.

From Wikipedia on Virtualization and Hypervisor:

"Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but their domain of use in language differs."

"A hypervisor (or virtual machine monitor, VMM, virtualizer) is a kind of emulator; it is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine."

This page explains how the virtualisation pieces fit together in Mageia.

Alternatives to virtualisation

Maybe you do not need virtualisation of hardware.

Containers

An alternative you may consider is containers; for those, Mageia has Docker and LXC.

Systemd-nspawn

The systemd-nspawn utility is a way to run containers on Linux. You can think of systemd-nspawn as a sort of chroot on steroids. It's not a method you want in a production environment, or something you want for daily usage, but it's a great way to learn more about container technology, and it can be used to test and develop software, or to create packages.
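
As a hedged sketch of how this could look (the directory and package name are assumptions, and the actual commands need root, so they are shown commented out):

```shell
# Directory that will hold the container's root filesystem (name made up):
root=/var/lib/machines/mga-test
echo "container root would be: $root"
# Install a minimal Mageia system into it (assumed package name):
#   sudo urpmi --urpmi-root "$root" basesystem-minimal
# Open a chroot-like shell inside the tree, without booting it:
#   sudo systemd-nspawn -D "$root"
# Boot the container, running its own init as PID 1:
#   sudo systemd-nspawn -b -D "$root"
```

Once booted, the container can be shut down like any machine, or managed from the host with machinectl.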

Just want to run specific programs

If you are only looking for a way to run programs you could not find packaged in the Mageia repositories, read Ways to install programs, where we explain how to use programs in AppImage format and run them securely in firejail, how to set up and use Flatpak, a great ecosystem with many programs, and a few other methods as well.

Virtualisation options

Basically, this is about choosing the hypervisor. In Mageia you have the following options:

VirtualBox

VirtualBox is good for sporadic desktop use. It is probably the easiest alternative if you are new to virtualisation and, for example, want to run MS Windows in a virtual machine with some program not available any other way.

Mageia wiki, Wikipedia, virtualbox.org

VMware

VMware is a corporate product meant for dedicated servers. It is not fully open source, so we do not package it, but we describe how to install it instead.

Mageia wiki, Wikipedia, vmware.com

QEMU

QEMU is a "domain builder": it emulates certain hardware so that a guest machine can run on it.

Wikipedia, qemu.org

KVM

KVM is a hypervisor that runs as a kernel module. It takes advantage of, but also requires, hardware virtualisation support in the CPU (Intel VT-x or AMD-V), which must be enabled in the BIOS/UEFI. It uses QEMU to build the domain.
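
A quick way to see whether your CPU offers those instructions, and whether they are enabled, is to look for the relevant flags in /proc/cpuinfo (vmx for Intel VT-x, svm for AMD-V):

```shell
# If neither flag shows up, the CPU lacks the extensions or they are
# disabled in the BIOS/UEFI setup.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualisation extensions: present"
else
    echo "CPU virtualisation extensions: absent or disabled"
fi
```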

Wikipedia, linux-kvm.org/

Xen

Xen is a hypervisor that's loaded before the kernel. It can take advantage of certain CPU hardware virtualisation instructions if you have them (enabled). Xen offers two kinds of virtualisation. Full virtualisation (HVM) uses those instructions and builds the domain with qemu. Paravirtualisation (PV) doesn't require them and doesn't fully virtualise the hardware; instead it depends on Xen-specific drivers in the guest (no domain is built as such). Because a PV guest partly shares the kernel and hardware with the host, it is only usable for Linux guests, but it is by design more resource-efficient.
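
To make the two modes concrete, here is a hypothetical guest configuration for the xl toolset, using the syntax of recent Xen releases (all names, paths, and sizes are invented):

```
# HVM guest: full virtualisation, domain built with qemu
type   = "hvm"
name   = "example-hvm"
memory = 2048
disk   = [ 'file:/var/lib/xen/images/example.img,xvda,w' ]

# A PV guest would instead skip the hardware requirement and boot a
# kernel supplied from the host side:
# type    = "pv"
# kernel  = "/boot/vmlinuz-example"
# ramdisk = "/boot/initrd-example.img"
```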

Mageia wiki, Wikipedia, xenproject.org

Managers

VirtualBox and VMware include graphical interfaces / control panels. Mageia also has graphical managers for the other hypervisors:

Virt-Manager

Virt-Manager is a graphical front-end to manage KVM, Xen or QEMU virtual machines, running either locally or remotely. It also works with lxc containers.

Mageia wiki, Wikipedia, virt-manager.org

GNOME Boxes

Gnome Boxes is a more desktop oriented graphical application to view, access, and manage remote and virtual systems. It shares a lot of code with the virt-manager project, mainly in the form of libvirt, libosinfo and qemu.

Wikipedia, GNOME

Virtualisation layers

Virtualisation can be quite tricky with all the different components involved; this is a short explanation of how they stack up around libvirtd.

KVM:

  1. host kernel
    • kvm module (kvm_intel or kvm_amd)
  2. qemu tools
  3. libvirtd virt. toolset driver (qemu/kvm)
  4. libvirtd
  5. libvirt client (like virt-manager, virt-viewer, virt-install, virsh) (can be remote)
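
The stack above can be checked from the bottom up. The first check runs on any Linux system; the others assume the qemu and libvirt packages are installed, so they are shown commented out:

```shell
# 1. Host kernel: the kvm module exposes /dev/kvm when it is loaded.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm exists: the kvm module is loaded"
else
    echo "/dev/kvm missing: module not loaded or no hardware support"
fi
# 2. The qemu tools:   qemu-system-x86_64 --version
# 4. The daemon:       systemctl status libvirtd
# 5. A client:         virsh -c qemu:///system list --all
```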

Xen:

  1. Hypervisor
  2. host Kernel
  3. (xend: only used with the xm/xapi toolset)
  4. virtualisation toolset (xm/xl/xapi)
    • for full virtualisation, this uses qemu
  5. libvirtd virt. toolset driver (libvirt-xend, libvirt-xenlight, ...)
  6. libvirtd
  7. libvirt client (like virt-manager, virt-viewer, virt-install, virsh) (can be remote)

I'll note that the xend/xm toolset is deprecated from Xen 4.3 onwards, which makes this stack a lot simpler.

Xen vs KVM

  • Paravirtualisation: this is very powerful if you have only Linux guests. There are specific drivers, and you boot guests from an external precompiled static kernel, which makes this the most efficient way; plus you don't need any special hardware. It can't quite be compared with the rest, so I'm leaving it out and comparing only the so-called Xen-HVM guests with KVM. In Xen-HVM, however, you can still use the efficient PV drivers.
  • Xen has a separate hypervisor that's loaded BEFORE the kernel. This has advantages and disadvantages: you have an extra part to configure, but the benefit is that you can change kernels and stay compatible.
  • Xen has the additional advantage that it can use both AMD and Intel virtualisation hardware - and even (live-)migrate guests between hosts of both kinds.
  • Xen is more difficult to set up than KVM.
  • KVM is developed in-kernel, but Xen is getting all its advantages into the kernel as well. (You used to need a different kernel.)
  • KVM starts a guest as a qemu-kvm process in the host, while Xen starts the guest via the hypervisor (which sits at a higher level than the kernel); this means that the host is also a (privileged) guest to the hypervisor.
  • As a result of the above, I think that KVM is useful for adding to your desktop, while Xen is more suited as a dedicated hypervisor in large environments (multiple hypervisors, need for high uptime, etc.)

How does qemu tie into all of this

Qemu has been around for a long time. It emulates hardware so that you can run complete guest systems on it.

Qemu comes in different profiles:

  • A large set of different architectures
  • But also qemu-kvm and qemu-xen

You can execute these directly with different parameters, but this isn't so easy.
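
To illustrate why: a hedged sketch of driving qemu by hand (the image name and sizes are invented; the commands are shown commented out since they need the qemu package and create files):

```shell
# Create a 10 GB copy-on-write disk image:
#   qemu-img create -f qcow2 disk.qcow2 10G
# Boot an installer ISO on that disk with KVM acceleration,
# 2 GB of RAM and 2 virtual CPUs:
#   qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
#       -drive file=disk.qcow2,format=qcow2 \
#       -cdrom installer.iso -boot d
```

A libvirt client generates an equivalent (much longer) command line for you from the guest's definition.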

For Xen, there's some tricky stuff here:

  • The Xen build contains a heavily modified qemu source tarball: they build their own version of qemu, called xen-qemu-traditional, so as to have all the interesting features. It will be obsoleted later on.
  • They are busy upstreaming this; most of it is done, but some parts are still missing.
  • This means that the qemu-xen binary from the qemu package itself isn't being used.
  • The Xen build also contains a second qemu tarball, the upstream qemu plus a small number of additional patches, called xen-qemu.
  • This means that both the Xen build and the qemu build provide a qemu-xen, but they differ.
  • In the near future all this mess will disappear: qemu-xen will be built from qemu, and Xen itself won't build qemu-xen anymore.

Keep in mind that qemu in Xen is only used for full virtualisation, and thus will generally only be used for non-Linux guests.

Libvirt

Libvirt is a management tool for virtualisation. It has backend drivers, a daemon and frontends.

  • There are many backend drivers, including KVM/Xen/OpenVZ/Hyper-V/VirtualBox/VMware/... Ideally it is a toolset by which you can manage several types of hypervisors centrally.
  • The daemon is not used for the virtualisation itself, but lets libvirt clients connect to it and manage the underlying resources (storage pools, networks, starting and stopping guests, etc.).
  • virt-manager (remotely or locally) and virsh (locally) are the most used libvirtd frontends.
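
Whatever the backend, libvirt describes each guest in an XML "domain" definition, which the clients edit and the daemon consumes. A minimal hypothetical example for a KVM guest (all names, paths, and sizes are invented):

```xml
<domain type='kvm'>
  <name>example-guest</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/example.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Such a file can be registered with virsh define and the guest then started with virsh start.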

Wikipedia, Website