Kernel-Based Virtual Machine Architecture and Configuration
QEMU and KVM Integration
QEMU serves as a versatile machine emulator operating in user space, using dynamic binary translation to simulate CPU instructions and providing a comprehensive set of hardware device models. Guest operating systems therefore interact with emulated hardware rather than directly with the host's physical devices. While QEMU can virtualize CPU, memory, and I/O entirely on its own, its performance improves dramatically when paired with the Kernel-based Virtual Machine (KVM).
KVM is a kernel module that effectively turns the Linux kernel into a native hypervisor, taking on the core virtualization tasks of CPU execution, memory management, and interrupt handling. Through the character device /dev/kvm, the QEMU process hands guest CPU execution over to the kernel module, which runs it natively on the host processor using hardware virtualization extensions (Intel VT-x or AMD-V). In this hybrid architecture, KVM virtualizes compute and memory resources, while QEMU emulates peripheral I/O devices such as disk controllers, network interface cards, and graphics adapters, resulting in a robust server virtualization platform.
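Whether this acceleration path is available on a given host is easy to check; a minimal sanity check (the vendor-specific module is kvm_intel on Intel systems and kvm_amd on AMD systems):

# Confirm the KVM modules are loaded
lsmod | grep kvm

# Verify the character device through which QEMU reaches the hypervisor
ls -l /dev/kvm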
Management Stack
Efficient administration of KVM environments relies on a layered management approach. At the foundation lies libvirt, an open-source API and toolkit designed to provide a unified interface for various hypervisors. It encapsulates the complexity of managing virtual machines, storage pools, and networks, utilizing XML-based definitions to configure managed objects. The suite includes a daemon (libvirtd) and acts as a transparent adapter for upper-level management applications.
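Every managed object is described by such an XML document. The following is a minimal, illustrative domain definition rather than a production template; the name, disk path, and resource sizes are placeholders, and libvirt fills in unspecified details with defaults:

<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>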
Direct interaction with the libvirt API is facilitated through virsh, a command-line utility written in C that is well suited to managing individual virtual machine instances. For users who prefer a graphical interface, virt-manager offers a desktop application that communicates with the underlying libvirt daemon. At larger scale, Infrastructure-as-a-Service (IaaS) platforms such as OpenStack build on libvirt as well: the Nova compute service drives it to manage instance lifecycles, while companion services handle identity, storage, and network orchestration.
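A few representative virsh invocations, using demo-vm as a placeholder domain name:

# List all defined domains, running or shut off
virsh list --all

# Start a domain, then print its live XML definition
virsh start demo-vm
virsh dumpxml demo-vm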
Virtual Network Interface Configuration
To leverage Single Root I/O Virtualization (SR-IOV) for creating virtual network interfaces, first identify the PCI address of the target physical device.
# List network hardware with bus information
lspci -D | grep -i ethernet

Assuming the physical device sits at 0000:04:00.0, Virtual Functions (VFs) are instantiated by writing the desired count to the device's sriov_numvfs attribute.
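It is worth first checking how many VFs the hardware supports, which a sibling sysfs attribute reports:

# Query the maximum number of VFs this device can expose
cat /sys/bus/pci/devices/0000:04:00.0/sriov_totalvfs

With the limit confirmed, write the desired count. This example creates two VFs.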
# Enable 2 virtual functions on the specified device
echo 2 | sudo tee /sys/bus/pci/devices/0000:04:00.0/sriov_numvfs
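For network devices, the kernel also reports the new VFs as sub-entries of their parent interface; assuming the physical function appears as enp4s0f0 (a hypothetical name; check with ip link), they can be listed directly:

# Show the parent interface together with its vf 0 and vf 1 entries
ip link show enp4s0f0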
Verify the successful creation of the virtual interfaces on the PCI bus using the listing command.

# Check PCI bus for new virtual ethernet devices
lspci | grep -i ethernet

PCI Device Passthrough Implementation
Direct hardware assignment, which relies on the host's IOMMU (Intel VT-d or AMD-Vi), allows a virtual machine to gain exclusive control over a physical PCI device, such as a network card. Be aware that the host operating system loses access to the device once it is passed through.
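libvirt can enumerate candidate devices and report the IOMMU group each one belongs to, which is worth reviewing before committing to passthrough; the device name below encodes the PCI address used in the rest of this example:

# Enumerate host PCI devices known to libvirt
virsh nodedev-list --cap pci

# Inspect one device, including its <iommuGroup> element
virsh nodedev-dumpxml pci_0000_03_00_1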
Configure the passthrough by editing the target domain's XML definition.
virsh edit guest-vm-name

Within the <devices> section of the configuration file, append the host device entry referencing the specific PCI coordinates. It is best practice to specify the VFIO driver for isolation.
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
</hostdev>

After starting the virtual machine, the assigned network interface will be visible inside the guest operating system as a direct physical hardware resource.
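As an alternative to editing the persistent definition, the same <hostdev> snippet can be hot-plugged into a running guest; a sketch assuming the snippet has been saved to a file named hostdev.xml:

# Attach the device to the running guest and persist it in the config
virsh attach-device guest-vm-name hostdev.xml --live --config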