Why exactly?

I wanted to dual-boot Windows and Fedora on my “server”. However, unlike in the olden days, I don’t want to reboot every five minutes. I also wanted to game natively on this system.

One could run a VM under KVM and set up PCIe passthrough for peripherals, and I intend to try that with modest titles once my new GPU arrives. But there is an orthogonal way of solving this: booting the same disk both on bare metal and as a VM.

Edit (2022-10-31): I have at last succeeded with the passthrough + paravirtualization method; a new post will arrive soon.


What do I need?

  • A disk (in this case, a Samsung 860 EVO) to install the OS (Windows 10) on
  • Linux QEMU/KVM hypervisor
  • A workstation with virt-manager installed

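The disk will be referenced below by its stable /dev/disk/by-id/ path rather than /dev/sdX, since the latter can change between boots. A quick way to look it up (the Samsung ID here is from my setup; substitute your own):

```shell
# List the stable symlinks and note the ata-* entry of the target SSD.
ls -l /dev/disk/by-id/

# The path to pass into the VM XML (id from my setup; replace with yours):
DISK=/dev/disk/by-id/ata-Samsung_SSD_860_EVO_250GB_RANDOM
echo "$DISK"
```
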
How to?

I will be using the VM domain name win10-860evo and the SSD ID ata-Samsung_SSD_860_EVO_250GB_RANDOM. Change them accordingly.

Read the libvirt docs on the domain XML format, especially the section on disks. For a quick reference, check this askubuntu post.

Create VM and add disk as block

Use BIOS/Q35 for the system; UEFI was not tested.

  1. Under virt-manager, connect to the server and create a new VM. During configuration, do not add a disk to it just yet.
  2. Add a virtio disk in either of two ways:
  • using virt-manager. Then edit it into the following structure in the XML tab:
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/disk/by-id/ata-Samsung_SSD_860_EVO_250GB_RANDOM'/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
    </disk>
  • manually editing (add the structure above into the <devices> block of vm-win10-860evo.xml):
virsh dumpxml --domain win10-860evo > vm-win10-860evo.xml
vim vm-win10-860evo.xml
virsh define vm-win10-860evo.xml
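After defining the domain, a quick sanity check (domain name as above) is to confirm the raw block device actually made it into the definition:

```shell
# The <source dev=.../> line should show the /dev/disk/by-id/... path,
# not a file-backed image. grep's context flags print the whole stanza.
virsh dumpxml win10-860evo | grep -B1 -A4 "disk type='block'"
```
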

Install Operating System

I will not reproduce installing Windows on a QEMU/KVM virtual machine here. But remember to insert the VirtIO Windows drivers .iso file during installation and load the storage drivers, so that you can at least partition the disk and thereafter install Windows.

Note: I came across a blue screen of death when Windows was first installed on bare metal and later booted in a VM. The approach above, installing inside the VM first, just worked.

Note: This installs guest drivers, which some proprietary software might detect and then refuse to run due to virtualization restrictions. Unlikely, yet be warned.

Edit (2022-10-31): In my case, the solution to the blue screen of death was this SO answer. The gist is to boot an installation medium and add the driver to the already-installed system. Even though a SATA disk is being used, passing the block device requires VirtIO drivers, I guess.

Couple of reboots

Post-installation, ensure the system is bootable in both cases. If you get a blue screen of death for some reason, try Windows Update and cross your fingers. If it is still not working, uncross your fingers and perform a startup repair using a USB/DVD installation medium.

Edit (2022-10-31): It is suggested to run Windows Update in only one of the two boot modes, that is, either on bare metal or in the virtual machine.

What about…

Let’s get technicalities out of the way:

  • The OS need not be a Windows version.
  • One could absolutely multi-boot within the VM; it just was not desirable here.
  • UEFI: I currently can’t try it, as my existing GPU is too old.
  • For Linux, one could definitely use LVM logical volumes as disks (/dev/mapper/...), yet one would need to solve the problem of the boot partition on bare metal. I can think of creating a separate primary boot partition on a disk other than the LVM disk. This separate boot disk would need to be mounted as a whole into the VM as a second virtual disk. That way, both boot scenarios could refer to a boot partition on their second disk by UUID (given a multi-disk-capable GRUB2 configuration).

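The LVM idea could be sketched as two disk entries in the domain XML. All names below (the vg0/win logical volume and the boot disk ID) are hypothetical:

```xml
<!-- Root filesystem on a hypothetical LVM logical volume vg0/win -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/mapper/vg0-win'/>
  <target dev='vda' bus='virtio'/>
</disk>
<!-- Whole second disk carrying the separate boot partition; the VM
     firmware boots from it, mirroring the bare-metal boot order. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_BOOT_DISK'/>
  <target dev='vdb' bus='virtio'/>
  <boot order='1'/>
</disk>
```
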
If I test UEFI boot, I will post it. For the LVM case, anyone foolhardy enough to do this please hit me up!