I have between 35 and 40 domUs on the same box, all running the same
applications and each carrying a moderate load. Each has between 150 and
512 MB of RAM and 50 GB of disk space. These domUs have become really
sluggish, and even adding a large amount of memory only helps a little.
This may be normal for a system with this many domUs, but I'm not sure
how to tell.

Hardware is:
  1U Acme 5CD16T Xeon 3000/3200 Core 2 Quad
  4x SATA drives (domUs are set up with LVM)
  Intel Core 2 Quad Q6600 Quad-Core 2.40GHz 8MB
  8 GB RAM

The physical server is running Debian stable and the guest domains are
Ubuntu Feisty. The VPS loads are often very high, frequently over two and
sometimes as high as 8 or 10.

Since this is only the second physical server I have set up this way, I'm
not really sure what is possible. The main question I have at the moment
is why the dom0 is running so well while all the domUs are completely
overloaded. Am I right in guessing that this is because the dom0 is more
of a controller than the thing actually responsible for running all these
domUs? Or is it just a VPS itself with special privileges?

Secondly, I will be purchasing two servers this coming week. I was
wondering if anyone had suggestions about hardware for comfortably
running up to 100 domUs per server.

Thanks,
-=nathan
On Sun, 23 Mar 2008, n8&abby wrote:

> I have between 35 and 40 domUs on the same box, all running the same
> applications and each carrying a moderate load. Each has between 150
> and 512 MB of RAM and 50 GB of disk space. These domUs have become
> really sluggish, and even adding a large amount of memory only helps a
> little. This may be normal for a system with this many domUs, but I'm
> not sure how to tell.
>
> Hardware is:
>   1U Acme 5CD16T Xeon 3000/3200 Core 2 Quad
>   4x SATA drives (domUs are set up with LVM)
>   Intel Core 2 Quad Q6600 Quad-Core 2.40GHz 8MB
>   8 GB RAM

So you've got 4 disk drives handling 40 different domUs... that's very
likely to be your bottleneck, either that or flat-out CPU power. If disk
is your bottleneck, you want to find the domUs doing the most disk I/O
and fix that... or give them more memory if they are doing substantial
read I/O that might be cacheable. And if you build another server this
way, you might want to buy the fastest drives you can get/afford.

Other than that, no one is really going to be able to tell you much;
system tuning is so dependent on the applications being run. You need to
be aware of what your bottlenecks are, develop some tools for figuring
out how close you are to hitting your limits, and go from there. Tools
like "xm list" and iostat make good starting points. xm list gives you
cumulative CPU usage; sampled over time, that tells you how much is
spare. iostat shows you which drives are working and, to some degree,
how hard. If you don't have iostat, all the info is in /proc/diskstats,
but you'll have to write some scripts to parse it.

Maybe someone else has some better pointers. In my experience, Xen
doesn't need much tuning itself, but your mileage may vary :)

-Tom
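As an illustration of the sort of /proc/diskstats parsing Tom mentions, a
minimal sketch (not from his mail; it assumes the 14-field 2.6 kernel
layout, where field 3 is the device name, 6 the sectors read and 10 the
sectors written):

  #!/bin/sh
  # Rough sketch: cumulative I/O per block device since boot.
  # 14-field lines cover whole disks and device-mapper volumes, so the
  # dm-* entries correspond to the individual domU LVs. Run it twice a
  # few minutes apart and compare to see who is busy right now.
  awk 'NF == 14 {
      printf "%-10s %12.0f MB read %12.0f MB written\n",
             $3, $6 * 512 / 1048576, $10 * 512 / 1048576
  }' /proc/diskstats | sort -k2 -rn

ls -l /dev/mapper should let you match the dm-N numbers back to the LV
(and hence domU) names.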
Hi Nathan,

On Sun, Mar 23, 2008 at 01:51:04PM -0700, Tom Brown wrote:

> So you've got 4 disk drives handling 40 different domUs... that's very
> likely to be your bottleneck, either that or flat-out CPU power. If
> disk is your bottleneck, you want to find the domUs doing the most
> disk I/O and fix that... or give them more memory if they are doing
> substantial read I/O that might be cacheable. And if you build another
> server this way, you might want to buy the fastest drives you can
> get/afford.

As Tom says, I would not be trying to run 40 virtual machines from 4 disk
spindles, let alone 100, unless I *knew* they were all CPU-limited (which
would be a very unusual profile for the average server).

If you want to put 100 general purpose servers on 1 piece of hardware, I
suggest you look into a lot more than 4 disks, and probably look at
10kRPM 2.5" SAS as well, as opposed to what I am guessing are commodity
7200RPM 3.5" SATA disks.

Check the iowait % in your domains and dom0 - if it is more than a few
percent then it's I/O you are short of, i.e. you need more disks. I
always run out of I/O before CPU or RAM too; it's pretty common.

Cheers,
Andy
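A quick sketch of what checking that looks like inside a guest or the
dom0 (assuming procps and the sysstat package are installed; the
intervals are arbitrary):

  vmstat 5 5        # the "wa" column is iowait, as a % of CPU time
  iostat -x 5 3     # per-device view: watch %util and await

Anything persistently above a few percent in "wa" while the CPU columns
stay low usually means the guests are waiting on their disks rather than
on the processor.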
> I have between 35 and 40 domUs on the same box, all running the same
> applications and each carrying a moderate load. Each has between 150
> and 512 MB of RAM and 50 GB of disk space. These domUs have become
> really sluggish, and even adding a large amount of memory only helps a
> little. This may be normal for a system with this many domUs, but I'm
> not sure how to tell.

High CPU workloads (I'm assuming that's what you mean) don't virtualise
particularly well. The advantage of virtualisation is that you get to
consolidate workloads which are underutilising their current environment
into a single environment. If all your domUs want to use 100% CPU and you
are putting them on a quad-core box, then you are going to hit the wall
at 4 domUs.

> Secondly, I will be purchasing two servers this coming week. I was
> wondering if anyone had suggestions about hardware for comfortably
> running up to 100 domUs per server.

It depends :) If all of your 100 domUs are going to sit at an average of
50% CPU, then you need 50 cores. It may still be cheaper to buy a 50-core
box than 6 x 8-core boxes, though... An HP DL580G5 with 16 cores, 64GB of
RAM, and ~1TB of RAID5 disk will set you back about AUD$50K.

James
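To put a number on how many cores the existing guests really need, one
way (a rough sketch only; the temp-file paths and the 300-second interval
are arbitrary) is to sample the cumulative Time(s) column of "xm list"
twice and divide the difference by the wall-clock time:

  #!/bin/sh
  # Rough estimate of average cores in use (dom0 included) over INTERVAL.
  INTERVAL=300
  xm list | awk 'NR > 1 { print $1, $6 }' | sort > /tmp/xm.before
  sleep $INTERVAL
  xm list | awk 'NR > 1 { print $1, $6 }' | sort > /tmp/xm.after
  join /tmp/xm.before /tmp/xm.after | awk -v t=$INTERVAL '
      { busy += $3 - $2 }
      END { printf "average cores in use over %d s: %.2f\n", t, busy / t }'

Scaling that figure up to 100 guests gives a first guess at the core
count James is talking about, before the disks even enter into it.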
On Sun, 23 Mar 2008, Andy Smith wrote:

> If you want to put 100 general purpose servers on 1 piece of hardware,
> I suggest you look into a lot more than 4 disks, and probably look at
> 10kRPM 2.5" SAS as well, as opposed to what I am guessing are
> commodity 7200RPM 3.5" SATA disks.

That's an interesting point. 1U cases for 2.5" drives are not common, but
Intel has an SR1550 line of servers (I have one, have tested it, but not
put it into production yet) that will fit EIGHT 2.5" drives. The active
SAS backplane uses the megaraid_sas driver, which has been supported
under RHEL since the late 4.x releases (4.4?).

> Check the iowait % in your domains and dom0 - if it is more than a few
> percent then it's I/O you are short of, i.e. you need more disks. I
> always run out of I/O before CPU or RAM too; it's pretty common.

Another good suggestion. If you want %cpu stats since boot, they are in
/proc/stat (which is almost certainly where top gets them before showing
you the relative changes)... I think they are measured in jiffies, which
may be 1/100th of a second, but this depends on your architecture.

  [root@copper /home/virtuals/html/bmd]# head -1 /proc/stat
  cpu 24113961 21788047 10244198 240697056 28184021 178598 92713 0
  (fields: user nice system idle iowait irq softirq steal)

  [root@copper /usr/src]# uptime
   11:52:25 up 37 days, 15:39, 9 users, load average: 1.19, 1.14, 1.12

OK, let's convert approximate uptime to jiffies:

  (37 * 24 + 15) * 3600 * 100 = 325080000 jiffies

  240697056 / 325080000 * 100 = 74 % idle
  28184021  / 325080000 * 100 = 8.7 % iowait

Interesting. I bet it is the morning indexing job that kicks up the
average I/O wait times. This machine rebuilds a swish index of over
450,000 email conversations (our tech support logs) every morning.

Hmm, and the load average is up due to a looping email message. Gotta go!
:)

-Tom
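The same arithmetic can be done straight from the counters, which avoids
eyeballing uptime. A minimal sketch (it just normalises by the sum of all
the fields on the aggregate "cpu" line, so it works whatever the jiffy
rate or CPU count):

  #!/bin/sh
  # idle and iowait since boot, as a share of all accounted CPU time
  awk '/^cpu / {
      total = 0
      for (i = 2; i <= NF; i++) total += $i
      printf "idle   %5.1f %%\n", $5 / total * 100
      printf "iowait %5.1f %%\n", $6 / total * 100
  }' /proc/stat

On Tom's numbers above this comes out at roughly the same 74% idle and
8.7% iowait he worked out by hand.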
> If you want to put 100 general purpose servers on 1 piece of hardware,
> I suggest you look into a lot more than 4 disks, and probably look at
> 10kRPM 2.5" SAS as well, as opposed to what I am guessing are
> commodity 7200RPM 3.5" SATA disks.
>
> Check the iowait % in your domains and dom0 - if it is more than a few
> percent then it's I/O you are short of, i.e. you need more disks. I
> always run out of I/O before CPU or RAM too; it's pretty common.

Thanks for all the great replies on this, very helpful. It turns out the
iowait on some domUs is higher than 50%. That makes sense, since most of
what these domUs are doing is serving very large files.

For the next system I am building this week, I'm looking into multiple
disks in the physical server. I will probably start with 16x 7200rpm
SATA.

However, we will be building many more of these and I am wondering what
other setups are in common use. Is there a straightforward way to set
this up in bulk? Should I be looking into some kind of NFS or NAS/SAN? I
don't have experience setting up anything but drives in the local
physical server, but I can learn whatever I need to. What is the Xen way
for large storage needs with lots and lots of domUs?

Thanks,
-=nathan
Each domU has 50 GB of space. SCSI / SAS drives don't work for us because
of the cost. These are torrent servers, each domU constantly uploading
and downloading many random parts of large files. Each domU averages
about 1Mbps transfer speed; we are currently hovering around 30 - 50Mbps
across the 40 domUs.

Each domU does not have to be super high performance, just usable.

I am interested in what other organizations are doing to host large
numbers of virtual private servers. We are interested in using lots of
inexpensive commodity hardware rather than a high-end hardware solution.

> I would suggest something like having a (or a few) 10Gb cards in your
> server.

Our current plan is to shift from about 10 domUs per 7200rpm SATA hard
disk to 4 domUs... Are you saying that buying lots more commodity disks
per physical server (12 instead of 4 serving 50 domUs) may not be enough?
I do like the idea of separating out the storage over a local area
network. Should I be looking into GFS for this?

On Thu, Mar 27, 2008 at 10:46 AM, Brian Stempin <brian.stempin@gmail.com>
wrote:

> I would imagine lots and lots of SAN or iSCSI cards.
>
> It sounds like you're in need of more storage bandwidth than a local
> RAID or disk controller can provide. I would suggest something like
> having a (or a few) 10Gb cards in your server. You can then have the
> Dom0 mount a series of iSCSI (AoE, whatever storage tech you want)
> targets and then pass them to the DomUs as block devices.
>
> It may also be helpful to the list to have a general idea of what the
> DomUs are doing, in order to help you identify other bottlenecks, etc.
Whoops... forgot to copy the list.

---------- Forwarded message ----------
From: Brian Stempin <brian.stempin@gmail.com>
Date: Fri, Mar 28, 2008 at 12:56 AM
Subject: Re: [Xen-users] domU's overloaded
To: n8&abby <thoughtobject@gmail.com>

> > I would suggest something like having a (or a few) 10Gb cards in
> > your server.
>
> Our current plan is to shift from about 10 domUs per 7200rpm SATA hard
> disk to 4 domUs... Are you saying that buying lots more commodity
> disks per physical server (12 instead of 4 serving 50 domUs) may not
> be enough? I do like the idea of separating out the storage over a
> local area network. Should I be looking into GFS for this?

Since you're downloading torrents, you're doing a lot of random reads
and writes, which puts more strain on the disks than the same volume of
sequential I/O would. Having more physical disks should help, but I have
no way of knowing whether it will make the impact you need it to; it
will also depend on things like your RAID controller, FSB speed, etc.

My suggestion was going in the direction of having one or many boxes
with one or many disk arrays attached. You could then, over iSCSI,
connect your DomUs to their disk space.

GFS would be useful if you have all of your DomUs writing to the same
file system. Otherwise, it's not needed.
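For illustration only, a minimal sketch of the dom0-side plumbing Brian
is describing; the portal IP, iSCSI target name, device name and volume
group below are all invented, and AoE or NFS would involve different
tools entirely:

  # dom0: discover and log in to the storage box (open-iscsi)
  iscsiadm -m discovery -t sendtargets -p 192.168.1.50
  iscsiadm -m node -T iqn.2008-03.com.example:torrents -p 192.168.1.50 --login

  # carve the imported disk (say it appears as /dev/sdc) into per-domU LVs
  pvcreate /dev/sdc
  vgcreate vg_san /dev/sdc
  lvcreate -L 50G -n domu01-disk vg_san

  # then, in the domU's config file, hand the LV over as its disk:
  #   disk = [ 'phy:/dev/vg_san/domu01-disk,xvda,w' ]

From the guest's point of view this looks just like a local disk; only
dom0 needs to know that the blocks actually live on the network.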