Zir Blazer
2013-Aug-28 17:38 UTC
Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing the GPU among VMs
I suppose some may still remember a few of my posts in this Mailing List, albeit a few months have passed. The reason why I don't post here very often is that, for me, Mailing Lists are extremely archaic and clumsy to use compared to modern, post-2000 era Forums, but as I want Xen-related help I'm forced to use it.

Anyways... I'm currently using Windows XP because it suffices for my needs, so I haven't had any reason to replace it even though it is more than 10 years old. However, it will not last forever, and when that happens, I want to use the opportunity to leave the Microsoft ship instead of jumping onto another of their OSes, with Linux being the obvious alternative. Though I always recognize all the Linux pros, my actual experience using it hasn't been more than a few hours in my lifetime (some in Ubuntu, and some more in Arch Linux recently). The reasons are, first, that I simply have no need to use it, as WXP is enough for all my needs, and second, that I wasn't easily able to deal with the cons of doing so: having to learn another OS before being able to use it with skill and productivity comparable to WXP, but also, because it isn't Windows, there are many applications, mainly games, that I wouldn't easily get running in Linux. Sure, you have WINE, but as far as I know it's a hit-or-miss tool, especially when it comes to recent games, so it doesn't guarantee me anything close to the 100% compatibility I would need to simply replace WXP with Linux as my main or only OS.

You can have both happily running natively with Dual Boot, though not simultaneously. That means that if I am playing a game in Windows and want to use Linux for something else, I'm forced to reboot. Not only do I waste time doing so, but I also have to close my session with all my applications in use, and it takes even more time to actually reboot again and get back to where I left off. So basically, while Dual Boot is functional, it is not for the lazy, as rebooting is annoying. As a guy with an "always-on" attitude who dislikes these types of downtimes, for me it's pretty much a real party crasher.

The last option is running one OS inside a VM in the other, but this one has even worse cons: a Linux VM inside my current Windows would only fulfill an extremely niche purpose, as I would only use it occasionally when I want to do something like, say, opening a suspicious file that I want to check but could be infected, or browsing a site that was hacked recently and may possibly still contain injected code that abuses a Windows browser exploit, where I wouldn't risk my everyday installation and would prefer to do it inside a VM (I currently use a WXP VM for this purpose). A Windows VM inside Linux is a no-go from the start, as the main reason to be in Windows is usually games, and without access to GPU acceleration you simply cannot play in it.

There are always people who wonder what Linux could do to take more market share from Windows. The issue lies in what I said above: even if I want to use Linux, I either can't leave Windows, or can't justify the hassle of using Linux for what I could do in either OS, so in the end, for things you can do in either OS, you just pick the one that you have at hand or are more comfortable with. I believe that Linux can't catch up because of that: either you need it for the niche where it is really good, or you don't and have no need to bother with it.
This is the reason why people like me would never make the Windows-to-Linux switch unless forced to do so: while Linux can do a lot of things, it does not do everything I need to replace Windows as my main OS, and even for the things it can do, as I can also do them in Windows, it isn't worth spending time with Dual Boots or VMs to actually use it for my daily tasks, so at the end of the day you will spend most of the time in Windows just to avoid those restarting downtimes.

The solution would be to be able to use both Linux and Windows simultaneously and switch between them on the fly, so you could simply Alt + Tab from one OS to the other, while staying as close as possible to running them natively, at least from the compatibility perspective. Indeed, you can't fully achieve that last part, but as far as I know, you can come very close. This way you could simultaneously use applications from both without the cons of the previous choices. While virtualization is extremely powerful for the Server market, for the typical user, albeit very useful for experimentation, it just serves a niche purpose, because what you could do in a VM wasn't enough for all daily tasks, mainly due to the fact that you expect to at the very least play games. Performance loss aside, the problem has always been compatibility. But thanks to IOMMU virtualization, you can actually do things like VGA passthrough, giving a VM control of a GPU and so finally making it possible to play games in a VM. This means that the above solution is finally achievable, so a power user could get all of the virtualization benefits AND, with every advance in Hypervisors, lose less compared to running native. This is what I have been preparing for years to jump into.

My last mails here have been regarding Hardware compatibility with the IOMMU virtualization feature, but I think I have that already nailed down. So, my next computer build specs will be:

Processor: Intel Xeon E3-1245 V3 Haswell. Alternatively a cheaper 1225 V3, 200 MHz slower and with no Hyper-Threading. I'm planning to use the integrated GPU.

Motherboard: Supermicro X10SAT (C226 Chipset). For my tastes, it's a very expensive Motherboard. I know ASRock has been praised for their good VT-d/AMD-Vi support even on their cheap Desktop Motherboards, but I'm extremely attracted to the idea of building a computer with proper, quality Workstation-class parts. As soon as I find it in stock and at a good price at a vendor that ships internationally, I'm ordering it along with the Processor.

Memory Modules: 32 GB / 4 * 8 GB AMD Performance Edition RP1866. Already purchased these, thinking of making a RAMDisk. No ECC that I could use with the Xeon and C226 Chipset, but oh well. The good thing is that I purchased them before their price skyrocketed.

Video Cards: 2 * Sapphire Radeon 5770 FLEX. Still going strong after 2 years of Bitcoin mining; undervolting them did wonders.

Hard Disk: Samsung SpinPoint F3 1 TB. Used to be a very popular model 3 years ago.

Power Supply: CoolerMaster Extreme Power Plus 460W. Still going strong after 4 years. If it could power my current machine (same as above but with an Athlon II X4 620 and an ASUS M4A785TD-V EVO), it will power the new Haswell.

Monitors: Samsung SyncMaster 932N+ and P2370H. I'm going to use Dual Monitors.

I'm intending on deploying Xen over a minimalistic Linux distribution that would allow me to do basic administration, like saving/restoring the VMs' backup copies, and have system diagnostic tools.
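If I understand the documentation correctly, the xl toolstack already covers the save/restore part. A minimal sketch of what I have in mind (the domain name and path are placeholders I made up):

    # Pause the domU and dump its memory state to a checkpoint file:
    xl save mydomu /backups/mydomu.chk

    # Later, resume it exactly where it left off:
    xl restore /backups/mydomu.chk

One caveat I'm aware of: domains with passthrough Hardware generally cannot be saved or migrated, so for the gaming VMs this would presumably only apply to backups of their disk images.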
Arch Linux seems great as that minimal base, though I will have to check what I could add to it to make it more user-friendly instead of having to rely only on console commands. The Hypervisor and its OS MUST be rock solid, and I suppose they will also be entirely safe if I don't allow them to have Internet access by themselves, only the VMs.

I'm intending on using several VMs. These will be:

1 - A Linux VM for all my everyday needs (browsing, Office work, etc). Maybe Ubuntu. This one should replace all the non-gaming things I currently do on Windows.

2 - A base Windows XP VHD that I would make copies of, to have as many VMs as needed. Its main purpose will be gaming. The reason why I may need more than one is that currently, there are many instances where opening more than one game client of an MMO game I play can cause a graphics glitch that slows my computer to a crawl, usually leaving it unresponsive enough to not even let me open the Task Manager to close the second client instance. This happens when I try to have two or more Direct3D based games running at the same time (also, with another game, it can happen that the game client complains that Direct3D couldn't be loaded, yet works properly after closing the already running clients). I have been googling around but can't find a name for this issue. I believe that having them in their own VMs could solve it.

3 - Possibly, a base Windows 7 VHD, also for gaming, for the day that I intend on playing a DirectX 10 only game.

4 - Possibly, another Linux VM where I can do load balancing with 2 ISPs, as it's probable that I will end up with 2 ISPs at home and I have yet to find a Dual WAN Router that doesn't cost an arm and a leg. If this is the case, all the other VMs' Internet traffic should be routed via this one, I suppose (see the routing sketch at the end of this message).

5 - Possibly, another Linux VM where I could send the two Radeon 5770s via passthrough, to use them exclusively and unmolested for Litecoin mining, and let the integrated GPU handle everything else.

6 - Possibly, I could get another Keyboard, Mouse and Monitor and assign them to a VM that could be used by guest visitors, so they can simultaneously use my computer to browse, effectively making a multi-user machine out of a single one. It could also work for a self-hosted LAN party, for as long as there are enough USB ports :D

Additionally, as I have tons of RAM but no SSD, I will surely use a RAMDisk. Basically, I expect to be able to set up a RAMDisk a few GBs in size, copy the VHD that I want there, and load the VM at stupidly fast speeds (a sketch of this workflow is at the end of this message). This should work for any VHD where I want the best possible IOPS performance and don't mind that it is volatile (or back it up often enough).

Up to this point, everything is well. The problem is the next part...

Since when you do passthrough of a device, neither other VMs nor the Hypervisor can use it without reassigning that device (and the device also needs a Soft Reset function or something like that, if I did my homework properly), I suddenly have to decide in which VMs I want my 3 GPUs and 2 Monitors to be (the Monitors have to be physically connected to the GPUs, so video output will be wherever the GPU is at that moment), and which VMs could get away with using emulated Drivers (this should also apply to Audio, but I think that one can be fully emulated). Considering this, I suddenly have to really think about what goes where, which is what I can't decide.
Assuming I do passthrough of the 2 GPUs to the said Linux VM for mining, I wouldn't need a Monitor attached to either, and I could do passthrough of the IGP to the Windows gaming VM with Dual Monitors. However, in this case, I suppose that I wouldn't have video output from either the Hypervisor or any of the other VMs, including my everyday Linux one, killing the whole point of it, unless the IGP can be automatically reassigned on the fly, which I doubt. This means that the most flexible approach would be to leave the IGP with a single Monitor for the Hypervisor, and give each 5770 to a Windows gaming VM, but then I would be short one Monitor. Basically, because a Monitor is attached to a GPU that may or may not be where I want the video output, I may also need to switch Monitors between outputs often. So in order to do it with only passthrough, I will have to make some decisions here.

So, I have to find other solutions. There are two that are very interesting, and technically both should be possible, though I don't know if anyone has tried them.

The first one is figuring out whether you can route the GPU output somewhere else instead of that Video Card's own video output. As far as I am aware, there are some Software solutions that allow something like this: one is the Windows-based Lucid Virtu, and the other is the nVidia Optimus Drivers. Both are conceptually the same: they switch between the IGP and the discrete GPU depending on the workload. However, the Monitor is always attached to the IGP output, and what they do is copy the framebuffer from the Video Card to the integrated one, so you can use the discrete GPU for processing while redirecting its output to the IGP's video output. If you could do something like this on Xen, it would be extremely useful, because I could have the two Monitors always attached to the IGP and simply reassign the GPUs to different VMs as needed.

Another possible solution would be to assume that the virtual GPU technologies catch up, as I am aware that XenServer, which is based on Xen, is supposedly able to use a special GPU Hypervisor that allows a single physical GPU to be shared by several VMs simultaneously as a virtual GPU (in the same fashion that VMs currently see the vCPUs). This one sounds like THE ultimate solution. Officially, nVidia supports this only on the GRID series, while AMD was going to release the Radeon Sky aimed at the same purpose, though I don't know what Software solutions it brings. However, it IS possible to mod Video Cards so they are detected as their professional counterparts, and maybe that allows the use of the advanced GPU virtualization technologies only available on these expensive series:

http://www.nvidia.com/object/grid-vgx-software.html
http://blogs.citrix.com/2013/08/26/preparing-for-true-hardware-gpu-sharing-for-vdi-with-xenserver-xendesktop-and-nvidia-grid/
http://www.eevblog.com/forum/projects/hacking-nvidia-cards-into-their-professional-counterparts/

I think that there are some people who like to mod GeForces into Quadros because they're easier to pass through in Xen. But I'm aiming one step above that with a GeForce-to-GRID mod, as I think that full GPU virtualization would be a killer feature.

All my issues are regarding this last part. Does someone have any input regarding what can and cannot currently be done? I will need something quite experimental to make my setup work as I intend it to.
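For completeness, what I do know exists today is the plain reassignment workflow with the xl toolstack, roughly like this (a sketch only; the PCI address and domain name are placeholders):

    # Hide the card from dom0 and mark it assignable (alternatively,
    # boot dom0 with the xen-pciback.hide=(01:00.0) kernel parameter):
    xl pci-assignable-add 01:00.0
    xl pci-assignable-list

    # Hot-plug it into a running guest, and pull it out again later:
    xl pci-attach mydomu 01:00.0
    xl pci-detach mydomu 01:00.0

But as far as I understand, whether the card is actually usable again after a detach depends on it supporting Function Level Reset (the "Soft Reset" I mentioned above), and many consumer GPUs don't, which is exactly why I doubt on-the-fly reassignment will just work.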
Another thing which could be a showstopper is the 2 GB memory limit on VMs with VGA passthrough that I have been hearing about, though I suppose it will get fixed in some future Xen version. I'm looking for ideas, and for people who have already tried these experiments, to help me deal with all of this.
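In case it helps the discussion, here is how I picture the RAMDisk part, using a plain tmpfs (paths, sizes and names are just placeholders I made up):

    # Create a RAM-backed filesystem big enough for the image:
    mount -t tmpfs -o size=20G tmpfs /mnt/ramdisk

    # Copy the base image there, then boot the VM from the copy:
    cp /storage/winxp-base.img /mnt/ramdisk/winxp-a.img
    xl create /etc/xen/winxp-a.cfg

    # winxp-a.cfg would point its disk line at the tmpfs copy, e.g.:
    #   disk = [ 'file:/mnt/ramdisk/winxp-a.img,hda,w' ]

Everything not copied back to the Hard Disk before a shutdown or power loss is gone, of course, hence the frequent backups.

And for the Dual WAN router VM, from what I have read, Linux can do simple outbound load balancing with a multipath default route via iproute2. A sketch, assuming the two ISP gateways are 10.0.0.1 and 10.0.1.1:

    ip route add default scope global \
        nexthop via 10.0.0.1 dev eth0 weight 1 \
        nexthop via 10.0.1.1 dev eth1 weight 1

As far as I know this balances per destination/flow rather than truly bonding the two links, but for several VMs browsing at once that should be enough.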
powerhouse64
2013-Sep-18 22:07 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing the GPU among VMs
Running a slim hypervisor (based on Arch Linux) with Linux as well as Windows in VMs may be a textbook approach, but it makes things more complicated than they already are for someone with little Linux experience. Arch Linux itself may be a little more challenging than, say, a desktop distribution such as Linux Mint. While Arch is more up-to-date (or bleeding edge) than, say, Ubuntu or Debian, you may have to do some tweaking. However, the Arch Linux documentation is the best you can find.

I would go a much simpler way, unless you really have a need for making things more complicated (multiple VMs, thin hypervisor, etc.). What I did is install a user-friendly desktop Linux OS (Linux Mint 14 Mate, to be precise), then the Xen hypervisor from the repository, and finally the Windows 7 64-bit VM with VGA passthrough. Here is my how-to, if you are interested: http://forums.linuxmint.com/viewtopic.php?f=42&t=112013 . I also wrote some little backup scripts and other useful utility scripts, and everything has worked smoothly for well over a year now. The only hiccup I had was a Xen update that introduced a nasty error-22 bug (if I remember correctly the Xen devs have fixed it, but Ubuntu/Linux Mint are a bit behind in releasing it). Using a full-fledged desktop OS as dom0 (the administrative domain) has its pros and cons, but for private users I believe the pros outweigh the cons.

So my rig has only 2 GPUs - one for dom0/Linux Mint and one for my Windows VM. Both cards are connected to one screen - the screen has 2 DVI inputs and a VGA input. In your case I would do as follows (if you accept my suggestion of dumping the slim hypervisor):

1. Intel GPU (inside the CPU) for Linux / dom0 - set in the BIOS as the main GPU to boot with - connected to screen A
2. 1st graphics card for the Windows VM (XP or whatever you use mainly) - connected to screen B
3. 2nd graphics card for Windows 7 or a Linux guest, used only when needed - connected to screen A

Not sure about your screens, but mine and many I have seen allow switching between multiple inputs (if they have them). The above configuration also makes it easy to blacklist the AMD drivers, as you won't need them for your Linux dom0.

As far as I know, you can't just bind or unbind graphics drivers at will. Once you have booted your Windows VM with VGA passthrough, that graphics card is claimed by Windows. Even after closing the Windows VM, chances are that you won't be able to use that graphics card for, say, a Linux guest (with VGA passthrough). I haven't tried newer Xen releases nor the xl tool stack, so things may be a little different now, but I would not build on that. On the other hand, it's very easy to reclaim your USB devices for dom0 and thus for other VMs.

My advice is to take it easy at the beginning and start with a basic configuration with VGA passthrough (secondary passthrough). There are enough things that can go wrong there.

Regarding the 2GB limit for domUs (VMs), I never heard about that. My Windows 7 domU gets 24GB. By the way, 32GB is plenty for gaming rigs. The only time I've seen real demand for memory is doing photo or video processing, or audio recording/processing. In Linux you can easily assign the /tmp folder(s) to RAM, and fine-tune access to a swap partition (if you need one - I prefer to have it).

Good luck.
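P.S.: To give you an idea, here is a stripped-down version of what such a Windows domU config can look like (a sketch only - the PCI address, disk path and sizes are placeholders, not my actual file):

    # /etc/xen/win7.cfg - HVM guest with secondary VGA passthrough
    builder      = 'hvm'
    name         = 'win7'
    memory       = 8192
    vcpus        = 4
    disk         = [ 'phy:/dev/vg0/win7,hda,w' ]
    # The graphics card (and its HDMI audio function), hidden from dom0:
    pci          = [ '01:00.0', '01:00.1' ]
    # Secondary passthrough: the guest boots on the emulated VGA adapter
    # and the real card takes over once its driver loads inside Windows.
    gfx_passthru = 0

And the /tmp-in-RAM trick is a one-liner in /etc/fstab (the size is up to you):

    tmpfs  /tmp  tmpfs  defaults,noatime,size=2G  0  0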
David Sutton
2013-Sep-20 14:59 UTC
Re: Getting ready for a computer build for IOMMU virtualization, need some input regarding sharing the GPU among VMs
Zir,

On Wed, Aug 28, 2013 at 12:38 PM, Zir Blazer <zir_blazer@hotmail.com> wrote:
<snip>
> All my issues are regarding this last part. Does someone have any input
> regarding what can and cannot currently be done? I will need something
> quite experimental to make my setup work as I intend it to.
<snip>

You might be interested in a recent announcement I saw on the xen-devel mailing list, around work on a graphics virtualization solution:

http://lists.xenproject.org/archives/html/xen-devel/2013-09/msg00681.html

Regards,

  David