Zir Blazer
2013-Sep-17 08:34 UTC
Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
I already sent this here: http://lists.xen.org/archives/html/xen-users/2013-08/msg00228.html ...but nearly three weeks have passed and I received no reply, so I have reworded a few things to make them more understandable and am resending it in the hope of catching some replies.

I'm currently using Windows XP because it suffices for my needs, so I haven't had any reason to replace it even though it is more than 10 years old. However, it will not last forever, and when that happens I want to use the opportunity to leave the Microsoft ship instead of jumping onto another of their OSes, with Linux being the obvious alternative. Though I have always recognized Linux's pros, my actual experience using it amounts to no more than a few hours in my lifetime (some in Ubuntu, and some in Arch Linux recently). The reasons are, first, that I simply have no need to use it, as WXP covers everything I do, and second, that I couldn't easily deal with the cons: having to learn another OS before I can use it with skill and productivity comparable to WXP, and also, because it isn't Windows, there are many applications, mainly games, that I wouldn't easily get running on Linux. Sure, there is WINE, but as far as I know it is a hit-or-miss tool, especially with recent games, so it doesn't guarantee anything close to the 100% compatibility I would need to simply replace WXP with Linux as my main or only OS.

You can have both happily running natively with Dual Boot, though not simultaneously. That means that if I am playing a game in Windows and want to use Linux for something else, I'm forced to reboot. Not only do I waste time doing so, I also have to close my session with all my applications in use, and it takes even more time to reboot again and get back to where I left off. So while Dual Boot is functional, it is not for the lazy, as rebooting is annoying. For a guy with an "always-on" attitude who dislikes these kinds of downtime, it is pretty much a party crasher.

The last option is running one OS inside a VM in the other, but that one has even worse cons. A Linux VM inside my current Windows would only fulfill an extremely niche purpose, as I would only use it occasionally, say to open a suspicious file that I want to check but that could be infected, or to browse a site that was hacked recently and may still contain injected code abusing a Windows browser exploit, where I wouldn't risk my everyday installation and would rather do it inside a VM (I currently use a WXP VM for this purpose). A Windows VM inside Linux is a no-go from the start, as the main reason to be in Windows is usually games, and without access to GPU acceleration you simply cannot play.

People regularly wonder what Linux could do to take more market share from Windows. The issue lies in what I said above: even if I want to use Linux, I either can't leave Windows, or can't justify the hassle of using Linux for what I could do in either OS, so for the things both can do, you just pick the one you have at hand or are more comfortable with. I believe Linux can't catch up because of that: either you need it for the niche where it is really good, or you don't and have no reason to bother with it.
This is why people like me will never make the Windows-to-Linux switch unless forced to: while Linux can do a lot of things, it does not do everything I need to replace Windows as my main OS, and even for the things it can do, I can also do them in Windows, so it isn't worth the hassle of Dual Boot or VMs to use it for my daily tasks. At the end of the day you spend most of your time in Windows just to avoid the reboot downtime. The solution would be to use both Linux and Windows simultaneously and switch between them on the fly, so you could simply Alt + Tab from one OS to the other, while staying as close as possible to running them natively, at least from the compatibility perspective. You can't fully achieve that last part, but as far as I know you can come very close. This way you could use applications from both at once without the cons of the previous options.

While virtualization is extremely powerful for the server market, for the typical user it serves a niche purpose (albeit a very useful one for experimentation), because what you could do in a VM wasn't enough for all daily tasks, mainly because you expect to at the very least be able to play games. Performance loss aside, the problem has always been compatibility. But thanks to IOMMU virtualization you can now do things like VGA passthrough, giving a VM control of a GPU and finally making it possible to play games in a VM. This means the above solution is finally feasible, so a power user can get all of the virtualization benefits AND, with every advance in Hypervisors, lose less compared to running natively. This is what I have been preparing for years to jump into.

My last mails here were about hardware compatibility with the IOMMU virtualization feature, but I think I have that nailed down already. So, my next computer build specs will be:

Processor: Intel Xeon E3-1245 V3 Haswell
Alternatively a cheaper 1225 V3, 200 MHz slower and without Hyper-Threading. I'm planning to use the integrated GPU. This Processor is nearly the same as the Core i7 4770 except that the Turbo frequency is 100 MHz lower, but it is slightly cheaper, has ECC Memory support, the integrated GPU is supposed to be able to use professionally certified CAD Drivers (a la FirePro or Quadro), AND best of all, a name that stands out from the Desktop crowd. Additionally, I may want to undervolt it; check this Thread: http://forums.anandtech.com/showthread.php?t=2330764

Motherboard: Supermicro X10SAT (C226 Chipset)
For my tastes, it is a very expensive Motherboard. I know AsRock has been praised for good VT-d/AMD-Vi support even on cheap Desktop Motherboards, but I'm extremely attracted to the idea of building a computer with proper, quality Workstation-class parts. As soon as I find it in stock at a good price from a vendor that ships internationally, I'm ordering it along with the Processor. For more info on my Motherboard-finding quest, check this Thread: http://forums.anandtech.com/showthread.php?t=2326402

Memory Modules: 32 GB / 4 * 8 GB AMD Performance Edition RP1866
Already purchased these, thinking of making a RAMDisk. When I purchased them I didn't think I would end up being able to use ECC, as I only later decided to go the Xeon and C226 Chipset way, but oh well. The good thing is that I purchased them before their price skyrocketed.

Video Card: 2 * Sapphire Radeon 5770 FLEX
Still going strong after 2 years of Bitcoin mining; undervolting them did wonders.
Hard Disk: Samsung SpinPoint F3 1 TB
Used to be a very popular model 3 years ago.

Power Supply: CoolerMaster Extreme Power Plus 460W
Still going strong after 4 years. If it can power my current machine (same as above but with an Athlon II X4 620 and an ASUS M4A785TD-V EVO), it will handle the new Haswell.

Monitors: Samsung SyncMaster 932N+ and P2370H
I'm going to use Dual Monitors.

I'm intending to deploy Xen over a minimalistic Linux distribution that would let me do basic administration, like saving/restoring backup copies of the VMs, and provide system diagnostic tools. Arch Linux seems great for that task, though I will have to check what I could add to make it more user-friendly instead of having to rely only on console commands. The Hypervisor and its OS MUST be rock solid, and I suppose they will also be entirely safe if I don't allow them Internet access by themselves, only the VMs. I intend to run several VMs:

1 - A Linux VM for all my everyday needs (browsing, office work, etc). Maybe Ubuntu. This one should replace all the non-gaming things I currently do on Windows.

2 - A base Windows XP VHD that I would copy to create as many VMs as needed. Its main purpose will be gaming. The reason I may need more than one is that currently, there are many instances where opening more than one game client of an MMO I play can cause a graphics glitch that slows my computer to a crawl, and it is usually unresponsive enough that I can't even open Task Manager to close the second client instance. This happens when I try to have two or more Direct3D-based games running at the same time (with another game, the client may complain that Direct3D couldn't be loaded, but works properly after closing the already running clients). I have been googling around but can't find a name for this issue. I believe that having each of them in its own VM could solve it.

3 - Possibly a base Windows 7 VHD, also for gaming, for the day I want to play a DirectX 10-only game.

4 - Possibly another Linux VM where I can do load balancing across 2 ISPs, as it's probable that I end up with 2 ISPs at home and I have yet to find a Dual WAN Router that doesn't cost an arm and a leg. If this is the case, all the other VMs' Internet traffic should be routed via this one, I suppose (a rough routing sketch is below, after this list).

5 - Possibly another Linux VM where I could pass through the two Radeon 5770s to use them exclusively and unmolested for Litecoin mining, and let the integrated GPU handle everything else.

6 - Possibly I could get another keyboard, mouse and monitor and assign them to a VM that guest visitors could use, so they can browse on my computer at the same time, effectively turning a single machine into a multi-user one. It could also work for a self-hosted LAN party for as long as there are enough USB ports :D

Additionally, as I have tons of RAM but no SSD, I will surely use a RAMDisk. Basically, I expect to be able to set up a RAMDisk a few GBs in size, copy the VHD that I want into it, and load the VM at stupidly fast speeds. This should work for any VHD where I want the best possible IOPS and don't mind that it is volatile (or back it up often enough).
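To make the RAMDisk idea concrete, here is a minimal sketch of what I picture doing, assuming a tmpfs mount and an xl-managed guest whose config points its disk at the copy; all paths, sizes and names are made up for illustration and I have not tried any of this yet:

    # back a directory with RAM and copy the gaming VM's disk image into it
    mount -t tmpfs -o size=12G tmpfs /mnt/ramdisk
    cp /var/lib/xen/images/winxp-base.img /mnt/ramdisk/winxp-gaming.img

    # start the VM; its config would reference the tmpfs copy as its disk
    xl create /etc/xen/winxp-gaming.cfg

    # before unmounting or powering off, copy the image back if anything is worth keeping
    cp /mnt/ramdisk/winxp-gaming.img /var/lib/xen/images/winxp-gaming.img

And for item 4 above, the kind of load balancing I have in mind is the standard Linux multipath default route inside the router VM, again with placeholder interface names and gateways:

    # split outgoing traffic across the two ISP uplinks
    ip route add default scope global \
        nexthop via 192.168.1.1 dev eth1 weight 1 \
        nexthop via 192.168.2.1 dev eth2 weight 1

This ignores the harder parts (per-connection stickiness, return-path policy routing, failover), but it is roughly where I would start.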
Up to this point, everything is fine. The problem is the next part...

When you pass a device through, neither other VMs nor the Hypervisor can use it without reassigning it (and if I did my homework properly, the device also needs a soft reset function or something of the sort), so suddenly I have to decide which VMs my 3 GPUs and 2 Monitors should be assigned to (the Monitors have to be physically connected to a card, so video output appears wherever that GPU currently is), and which VMs can get away with emulated Drivers (this should also apply to Audio, though I think that one can be fully emulated). Considering this, I really have to think about what goes where, and that is what I can't decide (a rough sketch of what I mean by a per-VM assignment is at the end of this mail).

Assuming I pass the 2 GPUs through to the aforementioned Linux VM for mining, I wouldn't need a Monitor attached to either, and I could pass the IGP through to the Windows gaming VM with Dual Monitors. However, in this case I suppose I would have no video output from the Hypervisor or any of the other VMs, including my everyday Linux one, killing the whole point of the setup, unless the IGP can be automatically reassigned on the fly, which I doubt. This means the most flexible approach would be to leave the IGP with a single Monitor for the Hypervisor, and give each 5770 to a Windows gaming VM, but then I would be one Monitor short. Basically, because a Monitor is attached to a GPU that may or may not be where I want the video output at that moment, I may also need to switch Monitor inputs often. So to do this with passthrough alone, I will have to make some compromises.

So, I have to look for other solutions. There are two that are very interesting, and technically both should be possible, though I don't know if anyone has tried them.

The first one is figuring out whether you can route the GPU output somewhere other than that Video Card's own outputs. As far as I am aware, there are some Software solutions that do something like this: one is the Windows-based Lucid Virtu, and the other is nVidia's Optimus Drivers. Both are conceptually the same: they switch between the IGP and the discrete GPU depending on the workload. However, the Monitor is always attached to the IGP output, and what they do is copy the framebuffer from the discrete Video Card to the integrated one, so you can use the discrete GPU for rendering while redirecting its output to the IGP's video output. If you could do something like this on Xen it would be extremely useful, because I could keep the two Monitors permanently attached to the IGP and simply reassign the GPUs to different VMs as needed.

Another possible solution assumes that virtual GPU technologies catch up: I am aware that XenServer, which is based on Xen, is supposedly able to use a special GPU Hypervisor that lets a single physical GPU be shared by several VMs simultaneously as a virtual GPU (in the same fashion that VMs currently see vCPUs). This sounds like THE ultimate solution. Officially, nVidia supports this only on the GRID series, while AMD was going to release the Radeon Sky aimed at the same purpose, though I don't know what Software solution it comes with.
However, it IS possible to mod Video Cards so they are detected as their professional counterparts, and maybe that allows the use of the advanced GPU virtualization technologies otherwise only available on those expensive series:

http://www.nvidia.com/object/grid-vgx-software.html
http://blogs.citrix.com/2013/08/26/preparing-for-true-hardware-gpu-sharing-for-vdi-with-xenserver-xendesktop-and-nvidia-grid/
http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/

I think there are some people who like to mod GeForces into Quadros because they're easier to pass through in Xen. But I'm aiming one step above that, should I want a GeForce @ Grid mod, as I think full GPU virtualization would be a killer feature. All my issues are about this last part. Does anyone have input on what can and cannot currently be done to manage this? I will need something quite experimental to make my setup work as I intend. Another thing that could be a showstopper is the 2 GB limit on VMs with VGA passthrough I have been hearing about, though I suppose it will get fixed in some future Xen version. I'm looking for ideas, and for people who have already tried these experiments, to figure out how to deal with all of this.
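For reference, this is roughly the kind of per-VM assignment I am picturing for one of the gaming VMs, written as an xl-style guest config; the PCI addresses, names and paths are placeholders and I have not tested any of this:

    # /etc/xen/winxp-gaming.cfg (hypothetical)
    builder      = "hvm"
    name         = "winxp-gaming"
    memory       = 2048          # staying under the VGA passthrough memory limit mentioned above
    vcpus        = 2
    disk         = [ "file:/mnt/ramdisk/winxp-gaming.img,hda,w" ]
    vif          = [ "bridge=xenbr0" ]
    # one Radeon 5770 (GPU function plus its HDMI audio function) handed to this VM
    pci          = [ "02:00.0", "02:00.1" ]
    gfx_passthru = 1             # make the passed-through card the primary VGA device

The open question is everything around it: which card's physical outputs my Monitors end up on, and whether any of this can be reassigned without shutting VMs down.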
Gordan Bobic
2013-Sep-18 13:56 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
On Tue, 17 Sep 2013 05:34:08 -0300, Zir Blazer <zir_blazer@hotmail.com> wrote:

> I already sent this here:
> http://lists.xen.org/archives/html/xen-users/2013-08/msg00228.html

[huge snip]

I suspect the reason you never got a reply has to do with the lack of conciseness of your post - many people probably gave up on reading before you actually got to the point.

> My last mails here were about hardware compatibility with the IOMMU
> virtualization feature, but I think I have that nailed down already.
> So, my next computer build specs will be:
>
> Processor: Intel Xeon E3-1245 V3 Haswell
> Alternatively a cheaper 1225 V3, 200 MHz slower and without
> Hyper-Threading. I'm planning to use the integrated GPU. This
> Processor is nearly the same as the Core i7 4770 except that the
> Turbo frequency is 100 MHz lower, but it is slightly cheaper, has ECC
> Memory support, the integrated GPU is supposed to be able to use
> professionally certified CAD Drivers (a la FirePro or Quadro), AND
> best of all, a name that stands out from the Desktop crowd.

Intel integrated GPUs are not that great. Unreal Tournament 2K4 runs fine on my Chromebook, but that isn't exactly a particularly demanding game. If applications like the ones covered by the SPECviewperf benchmark are your primary concern, and you are on a budget, look into getting something like a Quadro 2000. If your main goal is gaming, get a GTX480 and BIOS-mod it into a Quadro 6000 - you would get the Quadro performance in SPECviewperf, but you will get perfectly working VGA passthrough. Look here:

GTS450 -> Q2000
http://www.altechnative.net/2013/06/23/nvidia-cards-geforce-quadro-and-geforce-modified-into-a-quadro-for-virtualized-gaming/

GTX470 -> Q5000
http://www.altechnative.net/2013/09/17/virtualized-gaming-nvidia-cards-part-2-geforce-quadro-and-geforce-modified-into-a-quadro-higher-end-fermi-models/

If you want something more recent than that, a GTX680 can be modified into a Quadro K5000 or half of a Grid K2, but this requires a bit of soldering.

The upshot of going for a cheap Quadro or a quadrified Nvidia card is that rebooting VMs doesn't cause the problems which ATI cards are widely reported to suffer from.

> Additionally, I may want to undervolt it; check this Thread:
> http://forums.anandtech.com/showthread.php?t=2330764

You should be aware that Intel have changed VID control on Haswell and later CPUs, so undervolt tuning based on the clock multiplier (e.g. using something like RMClock on Windows or PHC on Linux) no longer works. If you want to use this functionality, you would be better off picking a pre-Haswell CPU. I have this problem with my Chromebook Pixel, which runs uncomfortably hot (if you keep it on your lap), even when mostly idle.

> Motherboard: Supermicro X10SAT (C226 Chipset)
> For my tastes, it is a very expensive Motherboard. I know AsRock has
> been praised for good VT-d/AMD-Vi support even on cheap Desktop
> Motherboards, but I'm extremely attracted to the idea of building a
> computer with proper, quality Workstation-class parts. As soon as I
> find it in stock at a good price from a vendor that ships
> internationally, I'm ordering it along with the Processor. For more
> info on my Motherboard-finding quest, check this Thread:
> http://forums.anandtech.com/showthread.php?t=2326402

I have more or less given up on buying any non-Asus motherboards. Switching to an EVGA SR-2 after completely trouble-free 5 years with my Asus Maximus Extreme has really shown me just how good Asus are compared to other manufacturers.

All things being equal, if I were building my rig for purposes similar to yours (two virtual gaming VMs with VGA passthrough, one for me, one for the wife), I would probably get something like an Asus Sabertooth or Crosshair motherboard with an 8-core AMD CPU. They are reasonably priced, support ECC, and seem to have worked quite well for VGA passthrough for many people on the list.

Avoid anything featuring Nvidia NF200 PCIe bridges at all cost. That way lies pain and suffering. I'm in the process of working on two patches for Xen just to make things workable on my EVGA SR-2 (which has ALL of its PCIe slots behind NF200 bridges).

> Memory Modules: 32 GB / 4 * 8 GB AMD Performance Edition RP1866
> Already purchased these, thinking of making a RAMDisk. When I
> purchased them I didn't think I would end up being able to use ECC,
> as I only later decided to go the Xeon and C226 Chipset way, but oh
> well. The good thing is that I purchased them before their price
> skyrocketed.

I flat out refuse to run anything without ECC memory these days. This is a major reason why I don't consider Core i chips an option.

> Video Card: 2 * Sapphire Radeon 5770 FLEX
> Still going strong after 2 years of Bitcoin mining; undervolting them
> did wonders.

Did you stability test them? GPUs come pre-overclocked to within 1% of death from the factory.

> Hard Disk: Samsung SpinPoint F3 1 TB
> Used to be a very popular model 3 years ago.

I'd avoid Samsung and WD disks at all cost. They are unreliable, and either their SMART lies about reallocated sector counts or, worse, they re-use failing sectors rather than reallocate them. I also wouldn't consider putting any non-expendable data on anything but ZFS - silent corruption happens far more often than most people imagine, especially on consumer grade desktop disks.

> Power Supply: CoolerMaster Extreme Power Plus 460W
> Still going strong after 4 years. If it can power my current machine
> (same as above but with an Athlon II X4 620 and an ASUS M4A785TD-V
> EVO), it will handle the new Haswell.
>
> Monitors: Samsung SyncMaster 932N+ and P2370H
> I'm going to use Dual Monitors.

I find that ATI cards struggle with dual monitors, at least the high end ones (I use IBM T221s, each of which appears as 2-4 DVI monitors due to signal bandwidth requirements). It's fine on Linux with the open source ATI drivers (just slow), but on XP I never managed to get this to work at all with desktop stretching - most games don't see any mode over 800x600. Things work fine with Nvidia cards (except in games that have outright broken multi-monitor support, such as Metro Last Light).

> I'm intending to deploy Xen over a minimalistic Linux distribution
> that would let me do basic administration, like saving/restoring
> backup copies of the VMs, and provide system diagnostic tools. Arch
> Linux seems great for that task, though I will have to check what I
> could add to make it more user-friendly instead of having to rely
> only on console commands. The Hypervisor and its OS MUST be rock
> solid, and I suppose they will also be entirely safe if I don't allow
> them Internet access by themselves, only the VMs.

You might want to consider XenServer (based on CentOS). The main thing I'd suggest is keeping your VM storage on ZFS for easy snapshotting and other manipulation. I use such a setup and it has worked extremely well for me.

> 2 - A base Windows XP VHD that I would copy to create as many VMs as
> needed. Its main purpose will be gaming. The reason I may need more
> than one is that currently, there are many instances where opening
> more than one game client of an MMO I play can cause a graphics
> glitch that slows my computer to a crawl, and it is usually
> unresponsive enough that I can't even open Task Manager to close the
> second client instance. This happens when I try to have two or more
> Direct3D-based games running at the same time (with another game, the
> client may complain that Direct3D couldn't be loaded, but works
> properly after closing the already running clients). I have been
> googling around but can't find a name for this issue. I believe that
> having each of them in its own VM could solve it.

I didn't think firing up two D3D games at the same time was even possible - or are you talking about just minimizing one?

> 4 - Possibly another Linux VM where I can do load balancing across 2
> ISPs, as it's probable that I end up with 2 ISPs at home and I have
> yet to find a Dual WAN Router that doesn't cost an arm and a leg. If
> this is the case, all the other VMs' Internet traffic should be
> routed via this one, I suppose.

You may find this is a lot easier to do on the host, carefully choosing which devices to bridge VMs onto (i.e. in front of or behind the firewall). Load balancing across ISPs is always problematic, though. There are all sorts of issues that crop up.

> 5 - Possibly another Linux VM where I could pass through the two
> Radeon 5770s to use them exclusively and unmolested for Litecoin
> mining, and let the integrated GPU handle everything else.

My understanding was that the cost of electricity plus the amortized cost of the hardware nowadays makes mining cost-ineffective. But whatever floats your boat.

> 6 - Possibly I could get another keyboard, mouse and monitor and
> assign them to a VM that guest visitors could use, so they can browse
> on my computer at the same time, effectively turning a single machine
> into a multi-user one. It could also work for a self-hosted LAN party
> for as long as there are enough USB ports :D

I have had issues with USB port passing - specifically due to interrupt sharing. The only thing I've managed to get working reliably is passing USB ports that don't share interrupts with anything else, and ports frequently do share interrupts (even if not device IDs). This is with passing PCI USB controller devices. I found passing USB devices directly was problematic in other ways, not least of which were the extra CPU usage and perceivable response lag.

It also occurs to me that by this point you will need a LOT of PCIe slots for all your GPUs and USB controllers. And a lot of desk space for all the monitors, keyboards and mice.

> Additionally, as I have tons of RAM but no SSD, I will surely use a
> RAMDisk.

32GB of RAM doesn't sound at all like much relative to what you are actually trying to do. I'm doing half that much and am thinking it would be handy to upgrade my machine from 48 to 96GB of RAM.

> Basically, I expect to be able to set up a RAMDisk a few GBs in size,
> copy the VHD that I want into it, and load the VM at stupidly fast
> speeds. This should work for any VHD where I want the best possible
> IOPS and don't mind that it is volatile (or back it up often enough).

Even with PV drivers you'll likely bottleneck on CPU before you hit the throughput a decent SSD is capable of. Especially if you run your VMs off ZFS like I do. Granted, this is a ZFS issue, but I find present day storage is too unreliable to be entrusted to any FS without ZFS's extra checksumming and auto-healing features.

> Up to this point, everything is fine. The problem is the next part...
>
> When you pass a device through, neither other VMs nor the Hypervisor
> can use it without reassigning it

That is indeed correct - you cannot share a PCI device between multiple VMs simultaneously.

> (and if I did my homework properly, the device also needs a soft
> reset function or something of the sort),

You didn't do your homework correctly. Resetting is not really that much of an issue if you pick your hardware carefully (e.g. avoid NF200 PCIe bridges, ATI GPUs, and motherboards that people on this list haven't extensively tested to work in a trouble-free way).

> so suddenly I have to decide which VMs my 3 GPUs and 2 Monitors
> should be assigned to (the Monitors have to be physically connected
> to a card, so video output appears wherever that GPU currently is),
> and which VMs can get away with emulated Drivers (this should also
> apply to Audio, though I think that one can be fully emulated).

Not on XP it isn't. None of the QEMU emulated audio devices have drivers on XP and later. Latest upstream QEMU supposedly has Intel HDA audio emulation, but I haven't been able to test that yet. The two things that I have verified to work OK are PCI passthrough of audio devices (I am using a Sound Blaster PCIe) and USB audio hanging off of PCI passthrough USB controllers. So you'll need yet more PCI/PCIe slots and/or USB ports with non-shared interrupts.

> Considering this, I really have to think about what goes where, and
> that is what I can't decide. Assuming I pass the 2 GPUs through to
> the aforementioned Linux VM for mining, I wouldn't need a Monitor
> attached to either, and I could pass the IGP through to the Windows
> gaming VM with Dual Monitors.

There have been some long threads on the list recently about IGP VGA passthrough. It looks like results are very hardware dependent. And I'd be very surprised if you manage to get some serious non-retro gaming done on the IGP. I'm currently using a quadrified GTX480 (to Quadro 6000) for a 2560x1600 monitor and a quadrified GTX680 (to Quadro K5000) for a 3840x2400 monitor, and I could do with more GPU power on both.

> However, in this case I suppose I would have no video output from the
> Hypervisor or any of the other VMs, including my everyday Linux one,
> killing the whole point of the setup, unless the IGP can be
> automatically reassigned on the fly, which I doubt.

XP, for one, doesn't seem to handle GPU hotplug properly. Win7 did when I briefly tested it, but that still sounds like a lot of hassle. VGA passthrough is problematic enough as it is without such further complications.

> This means the most flexible approach would be to leave the IGP with
> a single Monitor for the Hypervisor, and give each 5770 to a Windows
> gaming VM, but then I would be one Monitor short. Basically, because
> a Monitor is attached to a GPU that may or may not be where I want
> the video output at that moment, I may also need to switch Monitor
> inputs often. So to do this with passthrough alone, I will have to
> make some compromises.

More monitors or a KVM switch?

> So, I have to look for other solutions. There are two that are very
> interesting, and technically both should be possible, though I don't
> know if anyone has tried them.

I think your list of requirements is already spiraling out of control, and if you really are mostly a Windows user, you may find the amount of effort required to achieve a system this complex is not worth the benefits. As I said, I'm finding myself having to write patches to get my system working passably well, and the complexity level of that is considerably lower than what you are proposing.

> The first one is figuring out whether you can route the GPU output
> somewhere other than that Video Card's own outputs. As far as I am
> aware, there are some Software solutions that do something like this:
> one is the Windows-based Lucid Virtu, and the other is nVidia's
> Optimus Drivers. Both are conceptually the same: they switch between
> the IGP and the discrete GPU depending on the workload. However, the
> Monitor is always attached to the IGP output, and what they do is
> copy the framebuffer from the discrete Video Card to the integrated
> one, so you can use the discrete GPU for rendering while redirecting
> its output to the IGP's video output. If you could do something like
> this on Xen it would be extremely useful, because I could keep the
> two Monitors permanently attached to the IGP and simply reassign the
> GPUs to different VMs as needed.

You could just use VNC for everything except your gaming VM(s). Dynamic GPU switching between VMs is somewhat ambitious, but by all means, have a go and write it up when/if you get it working properly.

> Another possible solution assumes that virtual GPU technologies catch
> up: I am aware that XenServer, which is based on Xen, is supposedly
> able to use a special GPU Hypervisor that lets a single physical GPU
> be shared by several VMs simultaneously as a virtual GPU (in the same
> fashion that VMs currently see vCPUs).

This was only announced as a preview feature a few days ago. I wouldn't count on it being as production-ready as you might hope. VMware ESX does something similar in the most recent version, but it's only supported on the Nvidia Grid cards. Those are _expensive_, but you might be able to get away with modifying some GTX680/GTX690 cards into Grids to get it working. You'll have to take a soldering iron to them, though.

> This sounds like THE ultimate solution. Officially, nVidia supports
> this only on the GRID series, while AMD was going to release the
> Radeon Sky aimed at the same purpose, though I don't know what
> Software solution it comes with. However, it IS possible to mod Video
> Cards so they are detected as their professional counterparts, and
> maybe that allows the use of the advanced GPU virtualization
> technologies otherwise only available on those expensive series:
>
> http://www.nvidia.com/object/grid-vgx-software.html
> http://blogs.citrix.com/2013/08/26/preparing-for-true-hardware-gpu-sharing-for-vdi-with-xenserver-xendesktop-and-nvidia-grid/
> http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/

You will notice that the list of supported hardware for VMware VGX is extremely limited - and for very good reason. And the other options aren't yet production-ready as far as I can tell - but I could be wrong.

> I think there are some people who like to mod GeForces into Quadros
> because they're easier to pass through in Xen. But I'm aiming one
> step above that, should I want a GeForce @ Grid mod, as I think full
> GPU virtualization would be a killer feature.

You'd better start reading through the Xen source code and get ready to contribute patches to help make this work. :)

> All my issues are about this last part. Does anyone have input on
> what can and cannot currently be done to manage this? I will need
> something quite experimental to make my setup work as I intend.

My input would be to stop dreaming and come up with a list of requirements a quarter as long; then maybe you can have something workable in place with less than two weeks of effort (assuming you take two weeks off work and have no other obligations to take up any of your time).

> Another thing that could be a showstopper is the 2 GB limit on VMs
> with VGA passthrough I have been hearing about, though I suppose it
> will get fixed in some future Xen version. I'm looking for ideas, and
> for people who have already tried these experiments, to figure out
> how to deal with all of this.

One of the memory limitation bugs has been fixed in Xen 4.3.0. The other (the one I've been having, courtesy of the NF200 PCIe bridges being buggy) I have a workable-ish prototype patch for, but it's nowhere near production-ready. But these would be the least of your problems with the above requirements.

Gordan
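P.S. If you do go down the ZFS-backed VM storage route, the day-to-day workflow I mean is roughly the following; the pool and dataset names are made up for illustration:

    # one dataset per base image, snapshotted once the install is pristine
    zfs create tank/vm/winxp-base
    zfs snapshot tank/vm/winxp-base@clean

    # cheap writable clones give you the "copy the base VHD per VM" pattern for free
    zfs clone tank/vm/winxp-base@clean tank/vm/winxp-game1
    zfs clone tank/vm/winxp-base@clean tank/vm/winxp-game2

    # and rolling a VM back to a known-good state is a one-liner
    zfs snapshot tank/vm/winxp-game1@before-patch
    zfs rollback tank/vm/winxp-game1@before-patch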
H. Sieger
2013-Sep-18 22:47 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
Gordan: I was quite surprised to see that you recommend Asus motherboards over others. Particularly the Sabertooth MB made me laugh a bit. Well, I like the Asus boards as long as they don't run Linux / Xen. Recently some people reported BIOS issues with Asus AMD boards - see here: http://xen.1045712.n5.nabble.com/Xen-IOMMU-disabled-due-to-IVRS-table-Blah-blah-blah-td5716461.html. In my case, an Asus Sabertooth X79, a BIOS update reportedly broke VT-d/IOMMU (luckily I didn't upgrade). When I wrote to Asus support at HQ to inquire about this issue, they first denied it and then told me to use Windows if I wanted any support. To me this translates into "buy elsewhere", since I use Xen / Linux. Essentially I'm now stuck with an Asus board where I can't run a BIOS upgrade (upgrading my board's BIOS is irreversible). So I guess experience varies.

Since you said you use Asus boards, which boards and which BIOS release? Also, do they all run Linux / Xen (or similar with IOMMU)? I don't know where Supermicro stands, but chances are they support Linux, since they are geared towards servers.

[full quote of Gordan's reply snipped]
Zir Blazer
2013-Sep-23 01:20 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
> I suspect the reason you never got a reply is to do with the > lack of conciseness of your post - many people probably gave > up on reading before you actually got to the point.Actually, I consider very important that people understand why I''m going through all this ordeal even though, as you said later, I''m mostly a Windows user. These paragraph were so you can get an idea about how I see Xen as a gateway to merge the best of both Windows and Linux worlds, along with solving a few of my personal issues that this type of virtualization could help me with.> If you want something more recent that that, GTX680 can be > modified into a Quadro K5000 or half of a Grid K2, but this > requires a bit of soldering. > > The upshot of going for a cheap Quadro or a Quadrified > Nvidia card is that rebooting VMs doesn''t cause problems > which ATI cards are widely reported to suffer from.I know about the possibility of hard modding GeForces into Quadros, but purchasing a new card for that purpose is totally out of my budget. So I''m limited to whatever I can do with my current ones. My understanding is that ATI Video Cards like the ones I already own were easier to passthrough than non-modded GeForces. No idea about the VM rebooting issues or what generations it applies to.> You should be aware that Intel have changed VID control on Haswell > and later CPUs, so undervolt tuning based on clock multiplier > (e.g. using something like RMClock on Windows or PHC on Linux) > no longer works. If you want to use this functionality, you would > be better off picking a pre-Haswell CPU. I have this problem with > my Chromebook Pixel, which runs uncomfortably (if you keep it on > your lap) hot, even when mostly idle.For mobile computers, pre-Haswell Processors VID can''t be touched by Software, either. I purchased a year ago a Notebook with a Sandy Bridge and after googling around looking for info on that, there are no tools to undervolt neither mobile Sandy Bridge, Ivy Bridge or Haswell. It has to be done via BIOS only, which no Notebook BIOS will let you do. On Desktop platforms I think there were a few tools that did worked to undervolt via Software, but I''m not sure on Haswell generation. People that appreciates undervolt are just a small niche.> I have more or less given up on buying any non-Asus motherboards. > Switching to an EVGA SR-2 after completely trouble-free 5 years > with my Asus Maximus Extreme has really shown me just how good > Asus are compared to other manufacturers. > > All things being equal, if I was doing my rig for similar purposes > as you (two virtual gaming VMs with VGA passthrough, one for me, > one for the wife), I would probably get something like an Asus > Sabertooth or Crosshair motherboard with an 8-Core AMD CPU. They > are reasonably priced, support ECC, and seem to have worked > quite well for VGA passthrough for may people on the list.I find hard to believe that someone here would recommend an ASUS Motherboard considering that they were know to be the less reliable when it came to IOMMU support, which is top priority for my purpose. They''re fine from the Hardware side, but these days where most of the important components are part of either the Processor or Chipset you can''t really go wrong unless its a very, very low end model. Most expensive Motherboards are full of worthless features I don''t need (And even makes virtualization harder, like that PCIe bridge). 
Besides that, what annoys me the most is ASUS's official "we don't support Linux, and we aren't going to spend dev resources on fixing the BIOS so that it follows the standard specifications, instead of expecting Windows to override settings and bad ACPI tables to make it work anyway" stance. That is the very reason why I'm currently avoiding them at all cost, and suggesting other people do the same. Oh, and every now and then ASUS also has several low end models that are to be avoided. Maximus Extreme, Sabertooth and Crosshair aren't budget Motherboards; most manufacturers don't screw up badly if you're spending around 200 USD or more on a Motherboard - yet ASUS does with IOMMU support. Supermicro is quite expensive, but it is also a very reputable Server brand. They're known for being 24/7 rock-solid workhorses, which is why I'm spending so much on one.

> Avoid anything featuring Nvidia NF200 PCIe bridges at all cost. That way lies pain and suffering. I'm in the process of working on two patches for Xen just to make things workable on my EVGA SR-2 (which has ALL of its PCIe slots behind NF200 bridges).

The Supermicro X10SAT has its three PCIe 1x slots and 4 USB3 ports behind a PLX 8606 chip. I'm not sure whether that one is supported or not, but I don't plan to fill those slots at the moment anyway.

> I flat out refuse to run anything without ECC memory these days. This is a major reason why I don't consider Core i chips an option.

There are several Xeon models that do support ECC Memory and are around the same price as a Core iX of similar specs. I think you also need a Server Motherboard based on the C222, C224 or C226 Chipsets, another one of Intel's artificial limits to force you to spend on a more expensive platform. But depending on your budget and what you were going to buy anyway, adding ECC support on an Intel platform can be relatively cheap if you pick the proper parts.

> Did you stability test them? GPUs come pre-overclocked to within 1% of death from the factory.

The default for my pair of Video Cards was 850 MHz @ 1.1 V for the GPU, which is the same as the reference Radeon 5770, so these are not factory overclocked. They were 100% stable for BitCoin mining at 850 MHz @ 1 V, which is what I had them at, along with the VRAM @ 300 MHz. I think other people with my model (Sapphire Radeon 5770 FLEX) managed around 1 GHz with the default cooling, but I didn't try that because I didn't want to push GPUs that were going to be on 24/7, nor stress the Power Supply. I barely had them running games for more than 10 minutes; that is not what I purchased them for. But considering that I don't mine any longer, I could repurpose them for other VMs.

> I also wouldn't consider putting any non-expendable data on anything but ZFS - silent corruption happens far more often than most people imagine, especially on consumer grade desktop disks.

I would need to purchase a new 2 TB+ HD to allow for ZFS redundancy on a single-disk system. I have around 800 GB of data on this one. At the moment, I can't.
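To be clear about what I mean by single-disk redundancy: from what I have read, ZFS can keep two copies of every block on the same disk, which protects against silent corruption and bad sectors (though not against the whole disk dying) at the cost of roughly doubling the space used - hence needing 2 TB+ for my ~800 GB. A rough sketch of what I understand that looks like (untested by me; the pool and device names are made up):

    # Single-disk pool; "tank" and /dev/sdb are placeholders.
    zpool create tank /dev/sdb

    # Keep two copies of every block in the important dataset.
    # This only affects data written after the property is set.
    zfs create tank/data
    zfs set copies=2 tank/data

    # Periodically re-read everything and verify checksums.
    zpool scrub tank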
> I didn't think firing up two D3D games at the same time was even possible - or are you talking about just minimizing one?

You can have two D3D games open at the same time even on WXP. However, as there is no proper GPU priority (like there is for the CPU), you literally have no control over where the video performance goes - League of Legends is usually in the 15-20 FPS range, and if I have EVE Online on the screen like in the screenshot, it is around 7-10. Also, remember that both games are running simultaneously on my integrated Radeon 4200, which is 5 years or so old; even the Haswell IGP should be 3-5 times more powerful than this.

http://imageshack.us/a/img24/4909/3uck.png

However, while it IS currently possible for me to run two games simultaneously, I wouldn't recommend it. Besides the performance issues - my old IGP is less than adequate even for a single game, let alone two at the same time - it is overall buggy. If I have been playing EVE Online for some time and try to open League of Legends, or, if I am already logged in, try to start a match, chances are it gives a D3D error, crashes, or glitches as described before. This is why I believe that having different games running simultaneously in separate VMs could increase reliability for use cases that D3D doesn't handle well enough.

Additionally, sometimes you MAY want to use an older Driver version for whatever reason. For example, for my Radeon 4200 I don't want to use any recent Catalyst Driver version, because at some point they introduced a 2D acceleration bug that locks up my machine. Check this Thread:

http://forums.anandtech.com/showthread.php?t=2332798

I can live with those older Drivers with no issue. However, if I want to use the Radeons 5770 for BitCoin/LiteCoin mining or any other GPGPU purpose, I need a later version of the Catalyst Drivers, because a newer one is required to install some AMD APP SDK versions that I need for OpenCL. This means that I have to choose between a more stable system, or OpenCL with the newer Drivers but with the risk of some additional freeze scenarios. This could also apply on a game-by-game basis: if I know that a game doesn't play nice with a specific Driver, I could have a VHD with a WXP/W7 install and another set of Drivers specific to that game.

For me as a Desktop power user, the power of Xen is all about working around OS and Software limitations that exist because, at the time, no one considered that you would try doing things in some specific fashion, usually by running asymmetric setups like mine (IGP + 2 discrete GPUs), multimonitor, etc. It is about making Frankenstein builds work as intended when the OS doesn't know how to do it. If my current machine had IOMMU support, I believe Xen could have helped me by letting me have a WXP VM with the Radeon 4200 on the stable Drivers, and a totally separate VM with the pair of Radeons 5770 on newer Drivers to mine, everything working without conflicts, assuming a perfect scenario.

> My understanding was that cost effectiveness of electricity + amortized hardware cost was nowadays such that it makes mining cost-ineffective. But whatever floats your boat.

Where I live it is STILL profitable, but barely so - not enough to bother, to be honest. I haven't been doing it for some months now due to the Driver issue described above. But originally it was one of the possible uses.

> I think your list of requirements is already spiraling out of control, and if you really are mostly a Windows user, you may find the amount of effort required to achieve a system this complex is not worth the benefits. As I said, I'm finding myself having to write patches to get my system working passably well, and the complexity level on that is considerably lower than what you are proposing.

I have been sitting on WXP for a decade.
Whatever I do now with Xen will probably last for years, which is why I consider the time worth it: WXP will fade sooner or later, and I will be left out in the cold when that happens. With a Hypervisor added to the mix, I have much more flexibility to follow almost any path, and to do so simultaneously.

The bad part of this is that while I do have the free time to spend making something out of Xen, I'm NOT a programmer. I can barely do anything beyond a Hello World. This means that whatever I want to do will be limited to what Xen can do either by itself or with existing patches, along with reading tons of guides, etc. If there are things that Xen currently can't do, they will be dropped from my list of requirements until the next Xen version, as long as they remain possible via Software modifications. I may be dreaming a bit: my proposed requirements are for my ultimate production setup, and what is achievable will obviously be less than that. However, from what you're telling me, what can currently be done isn't really that far from it.

> This was only announced as a preview feature a few days ago. I wouldn't count on it being as production-ready as you might hope.

It is not new. I recall hearing about nVidia GRID and its GPU Hypervisor around 6 months ago, and I think I read somewhere that they have already suffered more than a year of delays. Considering the market it is aimed at and how much the nVidia GRID cards cost, I would be surprised if they were still far from production-ready.

> In your case I would do as follows (if you accept my suggestion of dumping the slim hypervisor):

The problem is that if I run Xen on top of a full-fledged OS and use that as my everyday Desktop, I risk that if I somehow screw up its configuration, have to restart it, or whatever else, I will also lose my current work/game session in the Windows VMs. If my main Desktop were inside a VM, I could restart it without disturbing the other VMs, or even restore a full backup VHD if needed. Now, considering that Linux is renowned for being secure and stable (let's not talk about human error, especially from beginners...), chances are that would not happen often enough to make the setup unworkable. But my goal would still be the slim Hypervisor, for the previous reason, although you can argue that it is overkill redundancy.

> Not sure about your screens but mine and many I have seen allow switching between multiple inputs (if they have).

The Samsung SyncMaster P2370H does have an extra HDMI input; the SyncMaster 932N+ has only one input. I had never thought about entering the Monitor menu to manually switch inputs, and how that could serve this purpose.

> You might be interested in a recent announcement I saw on the xen-devel mailing list, around work on a graphics virtualization solution: http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg00681.html

Excellent to hear. I wasn't aware of that one; it would give me yet another possible choice. Overall I'm confident I will get this working close to what I intend. I have to purchase the new parts first before actually being able to test any of it.
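From the guides I have been reading, my understanding is that the dom0 side of handing a GPU to a VM mostly comes down to keeping dom0's drivers away from the card and then marking it as assignable - roughly something like the following (only a sketch of what I expect to need, I haven't been able to run any of it yet, and the PCI addresses are made up):

    # On the dom0 kernel command line (pciback built into the kernel),
    # hide both the VGA function and its HDMI audio function:
    #   xen-pciback.hide=(0000:02:00.0)(0000:02:00.1)

    # Or at runtime, load pciback and mark the functions as assignable:
    modprobe xen-pciback
    xl pci-assignable-add 0000:02:00.0
    xl pci-assignable-add 0000:02:00.1

    # Check what Xen currently considers assignable:
    xl pci-assignable-list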
Gordan Bobic
2013-Sep-23 11:09 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
On Sun, 22 Sep 2013 22:20:14 -0300, Zir Blazer <zir_blazer@hotmail.com> wrote:

>> If you want something more recent than that, GTX680 can be modified into a Quadro K5000 or half of a Grid K2, but this requires a bit of soldering.
>>
>> The upshot of going for a cheap Quadro or a Quadrified Nvidia card is that rebooting VMs doesn't cause problems which ATI cards are widely reported to suffer from.
>
> I know about the possibility of hard-modding GeForces into Quadros, but purchasing a new card for that purpose is totally out of my budget, so I'm limited to whatever I can do with my current ones. My understanding is that ATI Video Cards like the ones I already own are easier to pass through than non-modded GeForces. I have no idea about the VM rebooting issues or which generations they apply to.

As far as I can tell (including from personal experience up until a couple of months ago, when I finally gave up on any kind of solution based on ATI cards), all ATI cards suffer from the problems mentioned - and will continue to do so until ATI do something about it in their drivers (e.g. make sure that the driver resets the card back to a clean slate before unloading). Given ATI's track record of driver quality, I doubt it will ever happen, even though I don't imagine it is a particularly difficult thing to fix.

>> You should be aware that Intel have changed VID control on Haswell and later CPUs, so undervolt tuning based on clock multiplier (e.g. using something like RMClock on Windows or PHC on Linux) no longer works. If you want to use this functionality, you would be better off picking a pre-Haswell CPU. I have this problem with my Chromebook Pixel, which runs uncomfortably (if you keep it on your lap) hot, even when mostly idle.
>
> For mobile computers, pre-Haswell Processor VIDs can't be touched by software either. I purchased a Notebook with a Sandy Bridge a year ago, and after googling around for info on this, there are no tools to undervolt mobile Sandy Bridge, Ivy Bridge or Haswell. It has to be done via the BIOS only, which no Notebook BIOS will let you do. On Desktop platforms I think there were a few tools that did work to undervolt via software, but I'm not sure about the Haswell generation. People who appreciate undervolting are just a small niche.

I am not sure about Sandy Bridge and Ivy Bridge, but everything up to that point has controllable VID (within the hard-limited range). I haven't tried it on anything between Core2 and Haswell, since I change my laptops very infrequently. I do know, however, that a number of Atom based laptops and industrial motherboards which do have user-controllable VID simply ignore it (the Dell Mini, as well as Advancetech motherboards).

Anyway, my point is that if you are concerned to any substantial degree about power usage and heat (unlikely, given your GPU power draw), you may be better off with a previous-generation CPU that actually has working undervolting functionality. Note that in every case I have seen, once you adjust the voltage, the CPU runs at that voltage under all conditions. That means that the voltage won't drop any lower when the system is idle and clocks down. This results in the idle power usage going up rather than down. It reduces the average power consumption if your system is always running at 100% load, but in the typical case where it is idle 95% of the time, the power consumption will actually go up.
So if you are after a power reduction, VID adjusting is not necessarily the way to get it.

>> I have more or less given up on buying any non-Asus motherboards. Switching to an EVGA SR-2 after completely trouble-free 5 years with my Asus Maximus Extreme has really shown me just how good Asus are compared to other manufacturers.
>>
>> All things being equal, if I was doing my rig for similar purposes as you (two virtual gaming VMs with VGA passthrough, one for me, one for the wife), I would probably get something like an Asus Sabertooth or Crosshair motherboard with an 8-Core AMD CPU. They are reasonably priced, support ECC, and seem to have worked quite well for VGA passthrough for many people on the list.
>
> I find it hard to believe that someone here would recommend an ASUS Motherboard, considering that they are known to be among the least reliable when it comes to IOMMU support, which is the top priority for my purpose.

Less reliable than what? My experience is that if you want a consumer grade motherboard with decent tweakability, Asus are the only choice even remotely worth considering.

> They're fine from the hardware side, and these days, when most of the important components are part of either the Processor or the Chipset, you can't really go wrong unless it's a very, very low end model. Most expensive Motherboards are full of worthless features I don't need (and some even make virtualization harder, like that PCIe bridge). Besides that, what annoys me the most is ASUS's official "we don't support Linux, and we aren't going to spend dev resources on fixing the BIOS so that it follows the standard specifications, instead of expecting Windows to override settings and bad ACPI tables to make it work anyway" stance. That is the very reason why I'm currently avoiding them at all cost, and suggesting other people do the same. Oh, and every now and then ASUS also has several low end models that are to be avoided. Maximus Extreme, Sabertooth and Crosshair aren't budget Motherboards; most manufacturers don't screw up badly if you're spending around 200 USD or more on a Motherboard - yet ASUS does with IOMMU support.

Who doesn't?

EVGA sure do - their flagship $600 SR-2 has all of its PCIe slots behind NF200 bridges.

If you want something that "just works", your only sane option is one of a handful of HP or Dell workstations (ones that come with some sort of certification from hypervisor vendors), and to pay the going rate for them (which is exorbitant, but worth every penny if your time isn't worthless). And you'll just have to live with the lack of any kind of tweakability.

> Supermicro is quite expensive, but it is also a very reputable Server brand. They're known for being 24/7 rock-solid workhorses, which is why I'm spending so much on one.

And you have it on good authority (i.e. better than the vendor's without-prejudice pre-sales insinuation) that that particular motherboard has all of the required features AND that they actually tested it to ensure that virtualization and the IOMMU work fine?

>> Avoid anything featuring Nvidia NF200 PCIe bridges at all cost. That way lies pain and suffering. I'm in the process of working on two patches for Xen just to make things workable on my EVGA SR-2 (which has ALL of its PCIe slots behind NF200 bridges).
> The Supermicro X10SAT has its three PCIe 1x slots and 4 USB3 ports behind a PLX 8606 chip. I'm not sure whether that one is supported or not, but I don't plan to fill those slots at the moment anyway.

Interestingly, my GTX690 seems to have 3 (!) PLX PCIe bridges on it. I haven't modified it yet, so I cannot say how well they work, or what odd interactions might arise between those and the NF200.

>> I flat out refuse to run anything without ECC memory these days. This is a major reason why I don't consider Core i chips an option.
>
> There are several Xeon models that do support ECC Memory and are around the same price as a Core iX of similar specs. I think you also need a Server Motherboard based on the C222, C224 or C226 Chipsets, another one of Intel's artificial limits to force you to spend on a more expensive platform. But depending on your budget and what you were going to buy anyway, adding ECC support on an Intel platform can be relatively cheap if you pick the proper parts.

Or you could just pick more or less anything made by AMD, including desktop grade parts, with a better price/performance.

>> Did you stability test them? GPUs come pre-overclocked to within 1% of death from the factory.
>
> The default for my pair of Video Cards was 850 MHz @ 1.1 V for the GPU, which is the same as the reference Radeon 5770, so these are not factory overclocked.

Given there is little or no margin for error in them, I consider the _reference_ model to be pre-overclocked.

> They were 100% stable for BitCoin mining at 850 MHz @ 1 V, which is what I had them at, along with the VRAM @ 300 MHz.

How are you certain that they weren't making erroneous calculations? Were you crunching everything twice and cross-comparing just to make sure? Or are you naively assuming that something is stable just because it doesn't outright crash?

Did you stability test the hardware at those settings using available tools (e.g. OCCT, FurMark) for 24 continuous hours per tool to confirm error-free operation?

>> I also wouldn't consider putting any non-expendable data on anything but ZFS - silent corruption happens far more often than most people imagine, especially on consumer grade desktop disks.
>
> I would need to purchase a new 2 TB+ HD to allow for ZFS redundancy on a single-disk system. I have around 800 GB of data on this one. At the moment, I can't.

I guess you don't consider your data that valuable to you. Each to his own.

>> I didn't think firing up two D3D games at the same time was even possible - or are you talking about just minimizing one?
>
> You can have two D3D games open at the same time even on WXP. However, as there is no proper GPU priority (like there is for the CPU), you literally have no control over where the video performance goes - League of Legends is usually in the 15-20 FPS range, and if I have EVE Online on the screen like in the screenshot, it is around 7-10. Also, remember that both games are running simultaneously on my integrated Radeon 4200, which is 5 years or so old; even the Haswell IGP should be 3-5 times more powerful than this.
>
> http://imageshack.us/a/img24/4909/3uck.png

Right, running windowed.
> Additionally, sometimes you MAY want to use an older Driver version for whatever reason. For example, for my Radeon 4200 I don't want to use any recent Catalyst Driver version, because at some point they introduced a 2D acceleration bug that locks up my machine. Check this Thread:
>
> http://forums.anandtech.com/showthread.php?t=2332798

That's ATI drivers for you.

> For me as a Desktop power user, the power of Xen is all about working around OS and Software limitations that exist because, at the time, no one considered that you would try doing things in some specific fashion, usually by running asymmetric setups like mine (IGP + 2 discrete GPUs), multimonitor, etc. It is about making Frankenstein builds work as intended when the OS doesn't know how to do it. If my current machine had IOMMU support, I believe Xen could have helped me by letting me have a WXP VM with the Radeon 4200 on the stable Drivers, and a totally separate VM with the pair of Radeons 5770 on newer Drivers to mine, everything working without conflicts, assuming a perfect scenario.

That's a great theory. Unfortunately, it doesn't survive contact with reality in most cases. On a lot of hardware you run into firmware, hardware and driver bugs that require all kinds of hacky proprietary patches, similar to the ones I'm having to write for running a similar stack on my hardware.

>> My understanding was that cost effectiveness of electricity + amortized hardware cost was nowadays such that it makes mining cost-ineffective. But whatever floats your boat.
>
> Where I live it is STILL profitable, but barely so - not enough to bother, to be honest. I haven't been doing it for some months now due to the Driver issue described above. But originally it was one of the possible uses.

I thought most people who still mine do so on the proprietary purpose-built silicon that is now available, from what I hear.

>> I think your list of requirements is already spiraling out of control, and if you really are mostly a Windows user, you may find the amount of effort required to achieve a system this complex is not worth the benefits. As I said, I'm finding myself having to write patches to get my system working passably well, and the complexity level on that is considerably lower than what you are proposing.
>
> The bad part of this is that while I do have the free time to spend making something out of Xen, I'm NOT a programmer. I can barely do anything beyond a Hello World. This means that whatever I want to do will be limited to what Xen can do either by itself or with existing patches, along with reading tons of guides, etc. If there are things that Xen currently can't do, they will be dropped from my list of requirements until the next Xen version, as long as they remain possible via Software modifications.
> I may be dreaming a bit: my proposed requirements are for my ultimate production setup, and what is achievable will obviously be less than that. However, from what you're telling me, what can currently be done isn't really that far from it.

ASSUMING you manage to get some hardware within your budget that is bug-free for this kind of operation. That's a big assumption, and one that is untestable until you acquire some and test it yourself, by which point you have already spent your budget, so you're stuck with it.

This is why I'm saying it is of paramount importance that you get yourself a motherboard that other people here have extensively tested with VGA passthrough and large memory allocations and verified it to work in a completely trouble-free way.
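At least the basic sanity checks are quick once you do have a board in hand - something along these lines (from memory, so treat the exact message strings as approximate):

    # Confirm Xen actually enabled the IOMMU (wording varies by version):
    xl dmesg | grep -i 'I/O virtualisation'

    # 'hvm_directio' in virt_caps means PCI passthrough to HVM guests should work:
    xl info | grep virt_caps

    # See which devices sit behind which PCIe bridges (PLX, NF200, etc.):
    lspci -tv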
>> This was only announced as a preview feature a few days ago. I wouldn't count on it being as production-ready as you might hope.
>
> It is not new. I recall hearing about nVidia GRID and its GPU Hypervisor around 6 months ago, and I think I read somewhere that they have already suffered more than a year of delays. Considering the market it is aimed at and how much the nVidia GRID cards cost, I would be surprised if they were still far from production-ready.

That was for VMware, not Xen, AFAIK.

>> You might be interested in a recent announcement I saw on the xen-devel mailing list, around work on a graphics virtualization solution:
>>
>> http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg00681.html
>
> Excellent to hear. I wasn't aware of that one; it would give me yet another possible choice.

Once it is production stable and ready for general public non-developer consumption.

Gordan
Ole Johan Væringstad
2013-Sep-23 16:16 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
Zir Blazer,

From what I can tell, you are mostly a Windows user, with the main emphasis on gaming. You want to get into Linux, but you don't want to dual boot; you want to switch between Windows and Linux instantly and effortlessly. My advice is: get a dual-monitor KVM switch and set up two physical computers, one for gaming and one for Linux. Share files over the LAN.

I am currently running a Xen setup with a thin Gentoo Dom0, passing through one ATI Radeon to a Gentoo Desktop and another ATI Radeon to Win7 64, both as secondary. Yes, it works, but there are issues. As mentioned, I cannot reboot domUs due to the ATI drivers. With games in Windows, there are occasional visual glitches, and texture loading is slow. I am already using a KVM switch to switch between my two desktop DomUs and my two Macs.

Now, if I was half-serious about gaming, I would just move my main VGA 1.5 meters into my previous desktop, currently being (under)used as a git and portage server, run pure Windows on it, and run Xen with non-gaming guests on my current desktop. If I was Dead Serious about gaming, I would move all my data disks into my server and use that as a Linux Desktop, with pure Windows on my main computer. But then I would probably not buy VT-d/IOMMU hardware at all.

The reason I run my current Xen setup is that I like to fool around with bleeding edge tech. And bleeding edge tech is neither stable nor an out-of-the-box working solution with a simple setup. I had to spend a _lot_ of time getting all of this to work, I encountered a lot of frustration, and that is while being very comfortable in a console-only Linux environment and enjoying configuring and setting up systems from scratch.

If your main emphasis is learning Linux and playing around with tech, sure, go for Xen, but expect a very steep learning curve and a lot of hours spent. Setting up something like you originally proposed to be working and stable is, in my opinion, and without being demeaning, a pipe dream. You are most welcome to prove me wrong. If your main emphasis is gaming, use dedicated machines and a KVM switch. This is friendly advice, and it will save you a lot of headaches.

Unrelated: Gordan, would you mind sharing some URLs on quadrifying? I have a GTX on the shelf and I don't mind taking a soldering iron to it.

- OJ

For what it's worth, my ~working setup:

Gigabyte GA-X79S-UP5, i7 3820
xen 4.2.2-r1, xen-tools 4.2.2-r3 from portage (xl toolstack)
Dom0: Gentoo kernel 3.8.13, GeForce GT 610 with no drivers.
DomU: Gentoo kernel 3.8.13, Asus Radeon HD 7770, secondary passthrough
DomU: Windows 7 x64 Pro, Asus Radeon HD 7970, secondary passthrough

I have not tried xen 4.3 yet, because it would take some time to test, which I currently don't have.
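In case a config skeleton is a useful reference point, the relevant part of a Win7 domU with a card passed through as secondary under xl looks roughly like this (a trimmed sketch - the disk path, name and PCI addresses are placeholders, not copied from my actual file):

    # win7.cfg - minimal HVM guest with a GPU passed through as secondary
    builder = 'hvm'
    name    = 'win7'
    memory  = 8192
    vcpus   = 4

    # Placeholder disk; any block device or image file works.
    disk = [ 'phy:/dev/vg0/win7,hda,w' ]

    # The GPU and its HDMI audio function (placeholder addresses).
    pci = [ '01:00.0', '01:00.1' ]

    # Secondary passthrough: the emulated VGA remains the boot display,
    # so gfx_passthru stays off; the real card shows up as a second
    # adapter once the guest drivers are installed.
    gfx_passthru = 0

    vnc = 1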
Gordan Bobic
2013-Sep-23 19:07 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
On 09/23/2013 05:16 PM, Ole Johan Væringstad wrote:

> Unrelated: Gordan, would you mind sharing some URLs on quadrifying? I have a GTX on the shelf and I don't mind taking a soldering iron to it.

The hardware mod part is quite well documented here:

http://www.eevblog.com/forum/chat/hacking-nvidia-cards-into-their-professional-counterparts/

It's a fairly epic thread, but well worth a read. It is relevant if you are looking to mod a GTX680/690/770 into a Quadro K5000 or Grid K2.

If you are looking to mod a GTS450/GTX470/GTX480, you can do so by changing the strapping bits and the device ID in the BIOS. I haven't gotten around to writing that up yet, unfortunately. I will post here when I have.

Gordan
David TECHER
2013-Sep-23 20:39 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing multiple GPUs among VMs
For the GTX480-to-Quadro 6000 mod, you can find a short description I wrote here (not well documented):

http://www.davidgis.fr/blog/index.php?2013/09/18/969-xen-430-vga-passthrough-gtx-480-soft-moded-to-quadro-6000

Everything is still a bit new to me and my understanding is limited, so I wrote it up as I understood it.