Anyone have any tips on optimizing CentOS for best I/O when it's being used as a Xen server?

Mike
Hi,

I don't know about just I/O, but in general I do.

In /boot/grub/grub.conf (/etc/grub.conf on CentOS):

after the kernel (hypervisor) line, at the end, add dom0_mem=512M
after the module line for vmlinuz, at the end of that line, add nosmp

Then in /etc/xen/xend-config.sxp add:

(dom0-min-mem 0)

These mods have made my dom0 no longer pause at weird times of the day and allow faster performance of my domUs.

- Brian

On Jan 21, 2009, at 4:07 PM, lists@grounded.net wrote:
> Anyone have any tips on optimizing CentOS for best I/O when it's
> being used as a Xen server?
>
> Mike
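For reference, a sketch of how that stanza might end up looking on a stock CentOS 5 Xen host; the kernel version strings and root device below are illustrative placeholders, so check them against your own grub.conf:

    title CentOS (Xen)
            root (hd0,0)
            # hypervisor line: cap dom0's memory
            kernel /xen.gz-2.6.18 dom0_mem=512M
            # dom0 kernel line: boot it uniprocessor
            module /vmlinuz-2.6.18-92.el5xen ro root=/dev/VolGroup00/LogVol00 nosmp
            module /initrd-2.6.18-92.el5xen.img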
> I don't know about just I/O, but in general I do.
> [...]
> after the module line for vmlinuz, at the end of that line, add nosmp
>
> - Brian

But won't adding "nosmp" make your server only use one CPU core? Seems like a terrible waste.

Regards,
Vidar
> But won't adding "nosmp" make your server only use one CPU core?
> Seems like a terrible waste.
>
> Regards,
> Vidar

It will only make the Dom0 single-core. Most people pin their Dom0 to a dedicated core, so there is no reason to enable the SMP code if you know you are only using one core.

Ryan
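A minimal sketch of that pinning approach on a CentOS 5 era Xen 3.x toolstack (worth verifying the option names against your Xen version):

    # in /etc/xen/xend-config.sxp: give dom0 a single vCPU
    (dom0-cpus 1)

    # pin dom0's vCPU 0 to physical CPU 0 at runtime
    xm vcpu-pin Domain-0 0 0

    # check the result
    xm vcpu-list Domain-0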
My question was really meant to ask about I/O, as far as file transfer between the main host and the network, for host and guests, but anything is good.

Just trying to pull all my questions and notes together so that I can get on this in a week or two, and it's good to see folks sharing their ideas, methods, etc.

So for example, on a system that's pretty much RPM based, what tweaks can someone make to the various configuration files which would greatly help overall network I/O?

Mike
I've found the biggest issue with virtualization is disk I/O. NIC I/O I have not seen much of an issue with, especially if you are using a gigabit NIC; if you are having issues with NIC I/O, that would indicate you are possibly approaching 120MB/sec. Using separate NICs for your different networks, or bonding them with ALB, can help, though. If you are using NFS or iSCSI storage, use different NICs than your guest networks. A good quality switch can help as well, and is sometimes overlooked; a good quality HP 1800 series switch isn't expensive at all. I've seen some tests that suggest Intel NICs have lower latency than most others, almost half.

In most situations I find running RAID 1 / RAID 10 and using fewer than 5 VMs per partition is a good rule of thumb to stay away from disk contention issues. Using iSCSI and DRBD can also help with speed, as this dedicates a server to handling disk I/O, and those services can use much of its RAM as cache. Stay away from the *fake* RAID stuff or even the cheap RAID controllers; buy the better later-gen 3ware, LSI, or Areca controllers, or just use software RAID. Also format the partition XFS and set the noatime flag. The WD RE3/RE2/Raptor drives are incredibly fast, especially in a RAID 1.

-Craig

lists@grounded.net wrote:
> So for example, on a system that's pretty much RPM based, what tweaks
> can someone make to the various configuration files which would
> greatly help overall network I/O?
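A rough sketch of that ALB bonding setup on CentOS 5 (interface names and addresses are illustrative placeholders):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none

Restart networking (service network restart) and check /proc/net/bonding/bond0 to confirm the mode and slave state.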
If possible, add as many disks to the machine as it can take, and spread the VMs out across the disks / partitions.

Or, if you can, set up RAID 10 to help spread the I/O of different data onto different disks / controllers. Don't use IDE, and try to get the fastest disks for your budget; SATA II isn't that much more expensive than IDE. Or, if you can afford it and the mobo can handle it, get SCSI or SAS drives.

On Fri, Jan 23, 2009 at 7:03 AM, Craig Herring <craigeherring@gmail.com> wrote:
> In most situations I find running RAID 1 / RAID 10 and using fewer
> than 5 VMs per partition is a good rule of thumb to stay away from
> disk contention issues.
> [...]

--
Kind Regards
Rudi Ahlers
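If software RAID is the route taken, a minimal mdadm sketch for a four-disk RAID 10 (device names are placeholders):

    # create the array from four identically-sized partitions
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

    # watch the initial resync finish before loading VMs onto it
    cat /proc/mdstat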
Some very good advice below. If you have the budget for a decent SAN type box for storage, then Infiniband + RDMA + iSCSI + DRBD on two mirrored boxes should allow for excellent performance and easy failover.

Also, I cannot stress enough the importance of a decent RAID card. Spread the VMs across multiple RAID 1 arrays; a decent SAS card should also let you mix and match SAS and SATA drive arrays, which is often convenient.

-----Original Message-----
From: Rudi Ahlers
Sent: 23 January 2009 12:53
Subject: Re: [Xen-users] Optimizing I/O

> If possible, add as many disks to the machine as it can take, and
> spread the VMs out across the disks / partitions. Or, if you can, set
> up RAID 10 to help spread the I/O of different data onto different
> disks / controllers.
> [...]
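A minimal sketch of the DRBD half of such a mirrored pair, in DRBD 8.x syntax (hostnames, addresses, and backing disks are placeholders; check drbd.conf(5) for your version):

    # /etc/drbd.conf
    resource r0 {
        protocol C;                     # synchronous replication, safest for failover
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;        # local backing store
            address   10.0.0.1:7788;    # dedicated replication link
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

The resulting /dev/drbd0 on the primary node is then exported over iSCSI (or used directly) as the shared VM store.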
On Fri, Jan 23, 2009 at 2:13 PM, Robert Dunkley <Robert@saq.co.uk> wrote:
> Some very good advice below. If you have the budget for a decent SAN
> type box for storage, then Infiniband + RDMA + iSCSI + DRBD on two
> mirrored boxes should allow for excellent performance and easy
> failover.

Hello,

we have 7 servers with about 30 VMs spread across these servers. Would an iSCSI device with a 1Gb interface be enough to hold up to 100 VMs, or will the I/O not be enough? Most VMs serve websites, but some have heavy MySQL usage.

thx
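As rough arithmetic for that question: a single gigabit link tops out around the 120MB/sec Craig mentioned, so

    120 MB/s / 100 VMs = about 1.2 MB/s per VM

if every guest is active at once. That is plenty for mostly-idle websites but thin for busy MySQL instances.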
Hi,

As always, it depends on what kind of workload these machines run. Webserving alone is usually not a very (disk) I/O intensive task, as the most commonly hit files are kept in memory. Databases are a different story, however, as are fileservers (naturally). It might make sense to separate the databases onto a separate array, or run those on local disks.

What I'm doing is having all webservers on DomUs off an iSCSI array, and the databases in separate DomUs running off the servers' local disks. This keeps the performance of the databases predictable, keeps latency low, and takes a big chunk of I/O out of the iSCSI.

Regards,
Barry

> we have 7 servers with about 30 VMs spread across these servers.
> Would an iSCSI device with a 1Gb interface be enough to hold up to
> 100 VMs, or will the I/O not be enough? Most VMs serve websites, but
> some have heavy MySQL usage.

--
Barry van Someren
---------------------------------------
Email: barry@bvansomeren.com
Email: goltharnl@gmail.com
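In Xen guest-config terms, that split might look like this (volume group and guest names are illustrative):

    # web domU: root disk on the iSCSI-backed volume group
    disk = [ 'phy:/dev/vg_iscsi/web01,xvda,w' ]

    # database domU: root disk on a local-disk volume group
    disk = [ 'phy:/dev/vg_local/db01,xvda,w' ]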
> Or, if you can, set up RAID 10 to help spread the I/O of different
> data onto different disks / controllers. Don't use IDE, and try to
> get the fastest disks for your budget.

In the case of blades, you're kinda stuck with what you have, and in this case the blades use either SCSI or 2.5" IDE drives. Depending on how many blades I end up using, I might be able to go with fast SCSI drives, but if not, then the OS at least will have to be on IDE. Storage for the blades, such as work and shared data, will usually be network storage.

How about tweaking Linux itself, though; are there any tweaks? Craig suggested noatime, which is a good point that's helped me with GFS in the past. So aside from hardware, what about software tweaks that aren't dangerous but greatly help?

Mike
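For the noatime tweak specifically, a sketch of what that looks like in practice (device and mount point are placeholders):

    # /etc/fstab: skip access-time updates on the VM storage filesystem
    /dev/vg0/guests   /var/lib/xen/images   xfs   noatime   0 0

    # or remount a live filesystem without rebooting
    mount -o remount,noatime /var/lib/xen/images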
> we have 7 servers with about 30 VMs spread across these servers.
> Would an iSCSI device with a 1Gb interface be enough to hold up to
> 100 VMs, or will the I/O not be enough?

The bit of testing I've done to date led me to I/O bottlenecks when it comes to shared storage, which is why I started this thread. I figure I'm not alone in trying to find a good balance of hardware, but even more important, OS software tweaks as well.

This is something I used to use; not sure it applies to newer kernels, though. In /etc/sysctl.conf:

net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0

Mike
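Those take effect at boot; to apply them to a running system, reload with:

    # re-read /etc/sysctl.conf without a reboot
    sysctl -p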
I found an article on this; it's sort of old, but there is talk on the net about noatime being unsafe to use. Does anyone have any input on this? One would certainly not want to corrupt anything in an important environment such as a virtual one.

Mike
> What I'm doing is having all webservers on DomUs off an iSCSI array,
> and the databases in separate DomUs running off the servers' local
> disks. This keeps the performance of the databases predictable, keeps
> latency low, and takes a big chunk of I/O out of the iSCSI.

This is an area where I am having conflicting thoughts. In testing, I've found that the only real way of serving LAMP services is distributed standalone servers, rather than shared environments such as virtual servers. The speed differences are instantly obvious when connecting to a VM-based server compared to a standalone one.

The cost is a bit higher, but who wants to suffer slowness on LAMP services?

It seems to me that the best machines to virtualize are only machines which aren't required in such a demanding role as web/mysql servers are. Maybe I'm missing something.

Mike
On Fri, Jan 23, 2009 at 11:05 PM, lists@grounded.net <lists@grounded.net> wrote:
> In testing, I've found that the only real way of serving LAMP services
> is distributed standalone servers, rather than shared environments such
> as virtual servers. The speed differences are instantly obvious when
> connecting to a VM-based server compared to a standalone one.

Again, you caught me by surprise here. The newbie vs. experienced thing :) Seems like you're a seasoned VM user.

In general, I'd say any I/O optimization you previously used on another virtualization platform or on standalone boxes should also work on Xen. This includes noatime, which makes a huge difference when you're mostly serving static content (since you didn't give the link to the article that recommends against it, all I can say for now is "noatime is good").

Regarding VM vs. standalone: I'm not sure about your setup (were you still using VMware?), but in my experience, when serving the same load, a VM server (with local storage) is comparable to a standalone machine with the same resources and load. Now if we're talking about:
- using shared storage, or
- converting several standalone boxes into one using VMs without adding resources
then yes, your point about "instantly obvious" is valid.

> It seems to me that the best machines to virtualize are only machines
> which aren't required in such a demanding role as web/mysql servers
> are. Maybe I'm missing something.

The best machines to virtualize are those which aren't using resources close to 100%. By virtualizing them you increase utilization and reduce idle resources, thus saving money. I think that was the basic principle.

Regards,
Fajar
> In general, I'd say any I/O optimization you previously used on another
> virtualization platform or on standalone boxes should also work on Xen.
> This includes noatime, which makes a huge difference when you're mostly
> serving static content.

Yes, and now I've lost the URL, sorry about that. Basically, it talked about a guy doing tests and finding that with noatime, the data could become inconsistent, among other things. So, in your experience, this has not been a problem?

The thing about optimizing I/O is that the guests are still really just data on Ethernet ports, no matter how you cut it. Whether I run many instances of a web server or multiple separate guests, does it not always add up to more resources being used from the host? It just seems that you'd want to have standalone servers working full bore on serving up web pages in a distributed manner, rather than taxing a host with guests and all the extras needed for similar redundancy. Guess I'll have to run some tests once I get everything together and see what happens.

> Regarding VM vs. standalone: I'm not sure about your setup (were you
> still using VMware?), but in my experience, when serving the same load,
> a VM server (with local storage) is comparable to a standalone machine
> with the same resources and load.

My Windows machines are on VMware, but the reason I'm setting up Xen is to move my Linux servers over to Xen. I wasn't really thinking of web servers, but it won't hurt to run some tests once I have the environment together.

> Now if we're talking about:
> - using shared storage, or
> - converting several standalone boxes into one using VMs without adding
>   resources
> then yes, your point about "instantly obvious" is valid.

Well, shared storage on the host for the guests, but the web servers would serve from network storage.

> The best machines to virtualize are those which aren't using resources
> close to 100%. By virtualizing them you increase utilization and reduce
> idle resources, thus saving money. I think that was the basic principle.

That's pretty much been my approach. The plan is to take all non-I/O-intensive servers that do a lot of idling and virtualize them. Why have the heat, the power, the drives wearing down, etc., when virtualizing them is so very efficient for these?

Mike
On Mon, Jan 26, 2009 at 6:41 AM, lists@grounded.net <lists@grounded.net> wrote:
> Basically, it talked about a guy doing tests and finding that with
> noatime, the data could become inconsistent, among other things. So, in
> your experience, this has not been a problem?

I believe some software uses atime to determine whether or not data should be compressed. There are also some distros that use atime or mtime to determine which files on /tmp can be deleted. So far there hasn't been any data corruption or inconsistency that I know of that results from using noatime.

> The thing about optimizing I/O is that the guests are still really just
> data on Ethernet ports, no matter how you cut it.

On your setup, yes.

> Whether I run many instances of a web server or multiple separate
> guests, does it not always add up to more resources being used from the
> host? [...] Guess I'll have to run some tests once I get everything
> together and see what happens.

That's only logical. If your primary goal is redundancy and your servers are quite busy, then using virtualization doesn't help much.

> Well, shared storage on the host for the guests, but the web servers
> would serve from network storage.

Usually that would be where the bottleneck is. Using shared storage (SAN/NAS) requires that:
- the SAN/NAS box is capable of providing the I/O needed by all clients
- the storage network has enough bandwidth for those I/O needs
Usually this translates to using lots and lots of 72GB disks and dedicated switches/ports for the SAN/NAS.

Regards,
Fajar
> atime or mtime to determine which files on /tmp can be deleted. So far
> there hasn't been any data corruption or inconsistency that I know of
> that results from using noatime.

I'll give it a try and see what happens. The thread sort of got lost :). I was hoping to find some ideas on tuning the OS itself, not just NFS.

> On your setup, yes.

I only have the one server up right now; I am currently trying to pull all of the pieces together to build my first multiple-server Xen setup. So I'm still flexible and listening to input :).

> That's only logical. If your primary goal is redundancy and your
> servers are quite busy, then using virtualization doesn't help much.

Good, I didn't misunderstand something in that idea then.

> Usually this translates to using lots and lots of 72GB disks and
> dedicated switches/ports for the SAN/NAS.

I've been collecting hardware for a while; I just happen to have lots and lots of exactly these things :).

Seriously, I'm just a small developer who acquired a great deal of hardware over the past few years playing with various technologies. I didn't become a pro at any one of them, but I learned a lot of basics and conceptual ideas. I am hoping to put this experience to work by building a highly reliable setup that can not only handle some redundancy but, more importantly, actually be somewhat easy to use, expand on, etc.

I got myself into a business once where the growth was very fast. We could never keep up, and our technology was changing so fast that we constantly needed more power, so it ended up being a nightmare of multiple days awake at a time and too much downtime because of the constant changes. I'm hoping to get things right, as much as I can, up front, so that doing something serious with this doesn't turn into that nightmare again.

Mike