If I were going to upgrade I'd be looking for the equivalent of the LTSC version of Windows 10. That's the version they developed for large enterprises and regulated industries, with all the crap (Cortana, the tracking stuff, the Xbox integration stuff, the default installed apps, etc.) removed.
One idea I've toyed with is to just run something like the ESXi hypervisor or Unraid on my machine and make the GPU and USB host devices directly available to the VMs through device passthrough. Then I could have whatever OSes I want installed on the machine with near-native performance, because the guest OS has direct hardware access. You'd also have the ability to spoof or disable hardware you didn't want the OS to have access to (e.g. microphone and camera devices).
That would let you install Linux for day-to-day stuff and still have a Windows install for gaming.
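If you want to see whether a given box can even do this, here's a quick Python sketch, assuming a Linux-based host (Unraid counts), that checks the two prerequisites: hardware virtualization for running VMs at all, and an active IOMMU for handing devices to them. The paths are standard kernel interfaces, but treat it as a sketch rather than anything definitive.

    # Quick prerequisite check on a Linux-based host:
    # 1) hardware virtualization (Intel VT-x / AMD-V) to run VMs at all
    # 2) an active IOMMU (Intel VT-d / AMD-Vi) to pass devices through
    from pathlib import Path

    def cpu_flags():
        # The "flags" line in /proc/cpuinfo lists the CPU feature bits.
        for line in Path("/proc/cpuinfo").read_text().splitlines():
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("VT-x/AMD-V present:", bool(flags & {"vmx", "svm"}))

    # /sys/kernel/iommu_groups is only populated when the IOMMU is enabled
    # in firmware and in the kernel (e.g. intel_iommu=on on Intel boards).
    groups = Path("/sys/kernel/iommu_groups")
    print("IOMMU active:", groups.is_dir() and any(groups.iterdir()))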
Since you mentioned LTT, I'll point out he's done this a few times over the years.
Here is a build he did with 7 GPUs that is basically a LAN party in a box.
A few years later he attempted to build a single PC that all of his editors could use as an editing workstation.
Apparently Unraid is designed to do this sort of thing out of the box and uses KVM under the hood. Personally, though, I'm more familiar with ESXi, so that's what I use, even if it's not quite as powerful (e.g. you can't do software RAID with it).
Can you recommend some resources to learn more about this?
This all works using a feature of most reasonably modern CPUs and motherboards called an IOMMU (marketed as Intel VT-d or AMD-Vi), sometimes described as directed I/O or device virtualization. It allows a virtual machine to have direct access to hardware installed in the host machine (e.g. a GPU). So when you boot up the VM the hardware shows up in Device Manager, you install the device drivers, and the VM can use the hardware with essentially no performance penalty. The caveat is that multiple VMs can't use the same device at the same time unless the hardware is explicitly designed for sharing (e.g. SR-IOV or vGPU-capable GPUs aimed at large VM deployments).
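On a Linux host you can actually see how the IOMMU carves the hardware up. Here's a small Python sketch, assuming the IOMMU is already enabled, that lists each IOMMU group and the PCI devices in it; devices that share a group generally have to be passed through to the same VM together, which is why the board/slot layout matters for these builds.

    # List each IOMMU group and the PCI devices in it (Linux host).
    from pathlib import Path

    base = Path("/sys/kernel/iommu_groups")
    if not base.is_dir():
        raise SystemExit("No IOMMU groups found - is the IOMMU enabled?")

    for group in sorted(base.iterdir(), key=lambda p: int(p.name)):
        # Each device entry is a PCI address like 0000:01:00.0
        devices = sorted(d.name for d in (group / "devices").iterdir())
        print(f"group {group.name}: {', '.join(devices)}")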
The two main VM environments people use for this are VMware ESXi and the KVM system in Linux. This video shows someone doing it in VMware, and this video shows someone doing it in a commercial variant of the Linux KVM system.
I've experimented with this using VMware, since I already had an ESXi server with a GPU handy, and found it to be pretty straightforward. But the KVM stuff, being built on a full-blown Linux kernel, gives you a lot more flexibility.
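To give a feel for what the KVM side looks like under the hood, here's a rough Python sketch that boots a QEMU/KVM guest with a GPU handed over via VFIO. The PCI addresses and disk image are hypothetical, and it assumes the GPU (and its audio function) has already been bound to the vfio-pci driver, which is the part Unraid or libvirt normally handle for you.

    # Rough sketch: boot a KVM guest with a GPU passed through via VFIO.
    # Assumes QEMU/KVM is installed, the IOMMU is on, and the devices below
    # (hypothetical addresses) are already bound to the vfio-pci driver.
    import subprocess

    GPU = "0000:01:00.0"        # hypothetical GPU video function
    GPU_AUDIO = "0000:01:00.1"  # hypothetical GPU HDMI audio function

    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-machine", "q35",
        "-cpu", "host",
        "-smp", "8",
        "-m", "16G",
        "-device", f"vfio-pci,host={GPU},multifunction=on",
        "-device", f"vfio-pci,host={GPU_AUDIO}",
        "-drive", "file=windows.qcow2,if=virtio",  # hypothetical guest disk
    ]
    subprocess.run(cmd, check=True)

In practice most people drive this through libvirt/virt-manager or Unraid's web UI rather than raw QEMU, but it all boils down to that vfio-pci device assignment.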
Whenever I have tried this, the VM has always been excruciatingly slow. There always seems to be a performance penalty, even when I assign it 8 threads and 8 GB of memory (and never load it very heavily). I haven't used what you mentioned, though, just VMware Workstation.
Thank you. I'll look into this.