Managing KVM on Debian
Installing KVM
Install the required packages:
sudo apt install --no-install-recommends qemu-system libvirt-clients libvirt-daemon-system virtinst qemu-utils dnsmasq swtpm swtpm-tools ovmf bridge-utils libvirt-daemon-driver-storage-zfs
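After installation, it's worth confirming that libvirt is actually up before going further. A quick check (assuming a systemd-based Debian install):

```shell
# Check that the libvirt daemon is running
systemctl is-active libvirtd

# Confirm that virsh can talk to the system hypervisor
virsh -c qemu:///system version
```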
If another user needs to manage virtual machines, add them to the libvirt group:
sudo adduser <youruser> libvirt
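The group change only takes effect after the user logs in again. To verify it worked:

```shell
# List the current user's groups; libvirt should appear after re-login
groups

# Test access directly against the system libvirt instance
virsh -c qemu:///system list --all
```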
Moving storage to a custom location
By default, a storage pool named default is created. It will store your VM disks in /var/lib/libvirt/images. You can see its configuration by executing:
virsh pool-dumpxml default
If you want to change this to a different location, e.g. /srv/kvm, you must first destroy and undefine the existing pool:
virsh pool-destroy default
virsh pool-undefine default
Then, make sure your custom directory exists and has the correct permissions:
mkdir -p /srv/kvm
Then create a new default storage pool by writing its parameters to an XML file, e.g. default.xml:
<pool type="dir">
  <name>default</name>
  <target>
    <path>/srv/kvm</path>
  </target>
</pool>
You can also use a ZFS dataset as storage pool, with this alternative XML configuration:
<pool type="zfs">
  <name>default</name>
  <source>
    <name>pool1/kvm</name>
  </source>
</pool>
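This assumes the dataset pool1/kvm already exists. If it doesn't, create it first (pool1 here stands for your existing zpool):

```shell
# Create a dataset for VM storage on an existing zpool named pool1
sudo zfs create pool1/kvm
```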
Then, create the pool based on the definition and make sure it autostarts:
virsh pool-define default.xml
Pool default defined from default.xml
virsh pool-autostart default
Pool default marked as autostarted
virsh pool-start default
Pool default started
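You can then verify the result (assuming the steps above succeeded, the pool should show as active and autostarted):

```shell
# List all pools with their state and autostart flag
virsh pool-list --all

# Show capacity and allocation details for the new pool
virsh pool-info default
```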
Creating a network bridge
By default, all guests are placed on a NAT network behind the host's interface. The guests can reach out to the internet, but cannot be directly addressed from it. If you want inbound access, you'll need to create a network bridge.
Creating a bridge will alter your network configuration. It's a good idea to work in a persistent terminal (e.g. tmux), and to have access to the physical console in case the network goes down.
First, look up the configuration of your network adapter in /etc/network/interfaces. My network adapter is called enp9s0. This is what the file looks like originally:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug enp9s0
iface enp9s0 inet dhcp

# This is an autoconfigured IPv6 interface
iface enp9s0 inet6 auto
Then we adjust the file so the raw interface doesn't get any configuration assigned. Next we assign that interface to the bridge, and do the necessary configuration there:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface. Not configured here, but in the bridge, hence manual.
# allow-hotplug enp9s0
iface enp9s0 inet manual

# This is an autoconfigured IPv6 interface. Not configured here, but in the bridge, hence manual.
iface enp9s0 inet6 manual

# Bridge configuration
auto br0
iface br0 inet dhcp
    bridge_ports enp9s0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0

iface br0 inet6 auto
    accept_ra 1
To make these changes effective, restart the networking service:
service networking restart
Note that if you've assigned a fixed IP address via a DHCP MAC address registration, the bridge will identify itself with a new MAC address. You will need to update your registration.
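Once networking is back, it's worth confirming that the bridge came up and took over the physical interface (using the iproute2 tools):

```shell
# Show the bridge and its address; it should now hold the DHCP lease
ip -br addr show br0

# List interfaces attached to bridges; enp9s0 should appear under br0
bridge link show
```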
Then, define a bridged network in an xml file:
<network>
  <name>bridged</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
Then, define it with virsh and make sure it is started:
virsh net-define bridged.xml
virsh net-autostart bridged
virsh net-start bridged
To change the interface of existing VMs to use this bridge, edit their XML configuration and adjust their interface configuration so it looks like this:
<interface type='network'>
  <mac address='52:54:00:44:c3:e8'/>
  <source network='bridged'/>
  <model type='e1000e'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
If you want to install new VMs that should use this bridge, use:
virt-install --network network=bridged [...]
Disable VNC to guest
?
Creating a VM
Example for a simple VM:
virt-install --virt-type kvm --name windows-server-2022 \
  --cdrom /tank/storage/library/Downloads/SW_DVD9_Win_Server_STD_CORE_2022_2108.32_64Bit_English_DC_STD_MLF_X23-73837.ISO \
  --os-variant win2k8 \
  --graphics vnc,listen=0.0.0.0,password=foobar --noautoconsole \
  --disk size=50 --memory 4096 --network network=bridged
You can then connect with a VNC client to the host to perform the installation. The recommended client is TigerVNC, as it natively supports a protocol extension that maps the keyboard correctly by default.
Some useful options:
--tpm backend.type=emulator,backend.version=2.0,model=tpm-tis: Attaches an emulated TPM 2.0 device to the guest.
More examples can be found in this article. If you want to find out what os-variants are possible, you can list them as follows:
sudo apt install libosinfo-bin
osinfo-query os
Listing running VMs
virsh list --all
Deleting a VM
virsh undefine windows-server-2022
If the VM had NVRAM enabled, you must specify the --nvram option to delete it without errors:
virsh undefine windows-server-2022 --nvram
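If you also want to remove the VM's disk images in the same step, virsh can do that too. Note that this deletes the storage volumes irreversibly:

```shell
# Stop the guest first if it is still running
virsh destroy windows-server-2022

# Undefine it, removing its NVRAM and all attached storage volumes
virsh undefine windows-server-2022 --nvram --remove-all-storage
```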
Stopping a guest
This will ask a VM to shut down:
virsh shutdown <name>
This will force a VM to shut down:
virsh destroy <name>
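To ask every running guest to shut down at once, the same loop style used elsewhere on this page works (a sketch; guests only react to the shutdown request if they handle ACPI events or run a guest agent):

```shell
# Request a clean shutdown of every running domain
for dom in $(virsh list --name); do
  virsh shutdown "$dom"
done
```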
Editing a VM's XML configuration
virsh edit <name>
Finding the VNC display for a guest
virsh vncdisplay <name>
or
virsh domdisplay <name>
This snippet will list out the displays for all guests:
for dom in $(virsh list --name); do echo -n "$dom: "; virsh domdisplay $dom; done
