One idea I've toyed with is to just install something like the ESXi hypervisor or Unraid on my machine and make the GPU and USB host devices directly available to the VMs through device passthrough. Then I could have whatever OSes I want installed on the machine with near-native performance, because each guest OS would have direct hardware access. You'd also have the ability to spoof or disable hardware you didn't want a given OS to have access to (e.g. microphone and camera devices).
Can you recommend some resources to learn more about this?
This all works using a fairly recent (past 5 years or so) feature of modern CPUs and chipsets, an IOMMU (marketed as Intel VT-d or AMD-Vi), that allows a virtual machine to have direct access to hardware installed in the host machine (e.g. a GPU). When you boot up the VM the hardware shows up in Device Manager, you install the device drivers as usual, and the VM can use the hardware with essentially no performance penalty. The caveat is that multiple VMs can't use the same device at the same time unless you have hardware that is explicitly designed for it (e.g. some very high-end GPUs built for large VM deployments).
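If you want to check whether your board can actually isolate a device before committing to any of this, Linux exposes the groupings under /sys/kernel/iommu_groups once VT-d/AMD-Vi is enabled in firmware and on the kernel command line. Here's a minimal Python sketch (the paths are standard sysfs; the output depends entirely on your hardware) that lists each group and the PCI devices stuck together in it:

    #!/usr/bin/env python3
    """List IOMMU groups and the PCI devices in each one.

    A device can only be passed through together with everything else in
    its IOMMU group, so this tells you whether e.g. a GPU can be isolated
    cleanly. Needs the IOMMU enabled (intel_iommu=on / amd_iommu=on),
    otherwise /sys/kernel/iommu_groups is empty.
    """
    from pathlib import Path

    GROUPS = Path("/sys/kernel/iommu_groups")

    def main():
        if not GROUPS.is_dir() or not any(GROUPS.iterdir()):
            print("No IOMMU groups found -- is VT-d/AMD-Vi enabled?")
            return
        for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
            print(f"IOMMU group {group.name}:")
            for dev in sorted((group / "devices").iterdir()):
                # Each entry is a symlink to the device's sysfs node; its
                # 'class' file gives a hint of what kind of device it is.
                pci_class = (dev / "class").read_text().strip()
                print(f"  {dev.name}  (class {pci_class})")

    if __name__ == "__main__":
        main()

If your GPU ends up in the same group as a pile of unrelated devices, you'll need a different slot or ACS override workarounds, which is worth knowing before building the whole setup around it.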
The two main VM environments people use for this are VMware ESXi and the KVM system in Linux. This video shows someone doing it in VMware, and this video shows someone doing it in a commercial variant of the Linux KVM system.
I've experimented with this using VMware, since I already had an ESXi server with a GPU handy, and found it to be pretty straightforward. The KVM stuff, being based on a full-blown Linux kernel, gives you a lot more flexibility though.
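For a sense of what the KVM side looks like: the passthrough itself is just a hostdev entry in the libvirt domain definition, which you can add with virsh edit or through the libvirt Python bindings. A rough sketch, assuming a hypothetical domain named win10-gaming and a GPU at host PCI address 0000:01:00.0 that's already bound to vfio-pci:

    #!/usr/bin/env python3
    """Sketch: attach a host PCI device (e.g. a GPU) to a libvirt/KVM
    domain so it is passed through on the next boot.

    Assumptions to adjust for your setup: the domain name 'win10-gaming'
    is made up, and the GPU is at host address 0000:01:00.0 and already
    bound to the vfio-pci driver.
    """
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("win10-gaming")  # hypothetical domain name

    # Add the device to the persistent definition; it shows up in the
    # guest on the next start, where you install the vendor driver just
    # like on bare metal.
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()

Most people doing this for a desktop just edit the XML directly with virsh edit, but the effect is the same.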
Whenever I have tried this, the VM has always been excruciatingly slow. There always seems to be a performance penalty, even when I assign 8 threads and 8 GB of memory (and never load it too heavily). I haven't used what you mentioned though, just VMware Workstation.
Thank you. I'll look into this.
Well, I am convinced. ESXi it is for me.