My future plan currently looks like this for my VPS hosting solution, so any feedback would be appreciated:

Each Node:
Dell R210, Intel X3430 Quad Core, 8GB RAM
Intel PT 1Gbps Server Dual Port NIC using Linux "bonding"
Small pair of HDDs for the OS (probably in RAID1)
Each node will run about 10 - 15 customer guests

Storage Server:
Some Intel Quad Core chip
2GB RAM (maybe more?)
LSI 8704EM2 RAID Controller (I think this controller does 3Gbps)
Battery backup for the above RAID controller
4 X RAID10 arrays (4 X 1.5TB disks per array, 16 disks in total)
Each RAID10 array will connect to 2 nodes (8 nodes per storage server)
Intel PT 1Gbps Quad Port NIC using Linux bonding
Exposes 8 X 1.5TB iSCSI targets (each node will use one of these)

HP Procurve 1800-24G switch to create 1 X 4-port trunk (for the storage server) and 8 X 2-port trunks (for the nodes)

What do you think? Any tips?

Thanks
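For the bonding, I was planning something along these lines on each node (CentOS-style config; the interface names, addresses and bonding mode are only placeholders, 802.3ad/LACP only applies if the Procurve trunks are set up for LACP rather than static trunking, and on older CentOS 5.x releases the options may need to go in /etc/modprobe.conf instead of BONDING_OPTS):

/etc/sysconfig/network-scripts/ifcfg-bond0:
    DEVICE=bond0
    IPADDR=192.168.10.11        # storage network address (example)
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100"

/etc/sysconfig/network-scripts/ifcfg-eth2 (and the same again for eth3):
    DEVICE=eth2
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes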
Storage servers like having loads of RAM, as it serves as a really fast cache. You should definitely beef up your storage server in terms of RAM.

Antoine

On Tue, Jun 8, 2010 at 2:55 PM, Jonathan Tripathy <jonnyt@abpni.co.uk> wrote:
> My future plan currently looks like this for my VPS hosting solution, so
> any feedback would be appreciated: [snip]
Yeah, that seems like a good idea. Does 8GB sound OK? My main area of concern is using Ethernet for the links. Is this OK given the setup I described?

________________________________
From: Antoine Benkemoun [mailto:antoine.benkemoun@gmail.com]
Sent: Tue 08/06/2010 14:21
To: Jonathan Tripathy
Cc: Xen-users@lists.xensource.com
Subject: Re: [Xen-users] My future plan

Storage servers like having loads of RAM, as it serves as a really fast cache. You should definitely beef up your storage server in terms of RAM. [snip]
Hi Jonathan,

you should think about flash or SD cards as the Xen boot drive. This gives you lower cost and better energy efficiency. If you mount /tmp and /var/log on a tmpfs, these disks work very well and last a long time (example fstab entries at the bottom of this mail).

If you don't need that much disk space for your storage, use SAS disks. SAS (10k/15k) disks give you many more IOPS than SATA disks (more IOPS per $/EUR as well). And very important: a very large cache for your RAID controller.

Intel e1000e is a pretty good choice. These cards have a large buffer and generate only a few interrupts on your CPUs (in comparison to the Broadcom NICs).

Best Regards

Michael Schmidt

On 08.06.10 14:55, Jonathan Tripathy wrote:
> My future plan currently looks like this for my VPS hosting solution,
> so any feedback would be appreciated: [snip]
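The tmpfs mounts are just two fstab entries on each node; the sizes below are only a guess and should be tuned to your dom0 RAM, and remember that anything written there is lost at reboot:

    tmpfs   /tmp       tmpfs   defaults,size=512m   0 0
    tmpfs   /var/log   tmpfs   defaults,size=256m   0 0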
Hi Michael,

Thanks for the tip about using SSDs for the node OS drives.

Regarding the NICs, I was thinking about using this for the nodes:
http://www.intel.com/products/server/adapters/pro1000pt-dualport/pro1000pt-dualport-overview.htm
and this for the server:
http://www.intel.com/products/server/adapters/pro1000pt-quadport-low-profile/pro1000pt-quadport-low-profile-overview.htm

Are those the cards you were talking about? They are very cheap on eBay, you see... Do you think 4-port bonding for the server is good enough for 8 nodes?

Thanks

________________________________
From: Michael Schmidt [mailto:michael.schmidt@xncore.com]
Sent: Tue 08/06/2010 14:49
To: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: Re: [Xen-users] My future plan

Hi Jonathan,

you should think about flash or SD cards as the Xen boot drive. [snip]
Jonathan,

Michael Schmidt wrote:
> If you mount /tmp and /var/log on a tmpfs, these disks work very well and last a long time.

Be careful mounting a tmpfs on /var/log. If you're running on SSDs it's good to minimize disk writes, but in the event of a nasty error that brings down your system, you won't have any error logs to tell you what happened when it comes back up.

The best thing to do when you mount tmpfs on /var/log is to make a syslog rule that logs ERROR messages to storage that will survive a reboot. Something like:

*.err   /var/persistent.log

in /etc/rsyslog.conf should work.

Cheers,
- Philip
On 08.06.10 16:12, philip tricca wrote:
> Be careful mounting a tmpfs on /var/log. [snip]
> The best thing to do when you mount tmpfs on /var/log is to make a syslog
> rule that logs ERROR messages to storage that will survive a reboot.

Best practice: log over the network (with e.g. syslog-ng) to a central syslog server.

Best Regards
-Michael Schmidt
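With plain rsyslog on each node, forwarding everything to a central log host is a single rule in /etc/rsyslog.conf (the address below is just a placeholder; use a single @ for UDP instead of TCP):

    *.*   @@192.168.10.50:514

syslog-ng does the same job with a tcp() destination plus a log {} statement pointing at the central box.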
Hi Jonathan,

I use exactly these cards with success for my load balancing (over 2 x 800Mbit per node). I think they are a good choice for your iSCSI too.

Best Regards
Michael Schmidt

On 08.06.10 15:55, Jonathan Tripathy wrote:
> Hi Michael,
> Thanks for the tip about using SSDs for the node OS drives. [snip]
________________________________
From: Michael Schmidt [mailto:michael.schmidt@xncore.com]
Sent: Tue 08/06/2010 15:20
To: philip tricca
Cc: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: Re: [Xen-users] My future plan

> Best practice: log over the network (with e.g. syslog-ng) to a central syslog server.

---------------------------------------------------------------------------------------------------

Good idea! What about my use of software iSCSI and the NIC setup? Is this all OK?
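For reference, on the node side I was just planning on the stock open-iscsi initiator, roughly like the following (the target IP and IQN are made up), and then putting LVM on top of the resulting block device for the guests:

    iscsiadm -m discovery -t sendtargets -p 192.168.10.50
    iscsiadm -m node -T iqn.2010-06.uk.example:storage.node1 -p 192.168.10.50 --login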
Hi Jonathan,

It might be worth considering a different RAID card. Even with simple RAID1 I did not get proper RAID1 random-read interleaving performance with an LSI 1068 based controller (assuming the 1078 is very similar); an IOP-based Areca card behaved properly (only a 30% improvement over a single drive with the LSI, but 80% better with the Areca, in simple Bonnie testing). I was using CentOS 5.2 at the time (integrated drivers).

If you are feeling brave, maybe a PXE boot could work to save the need for any system drives on the nodes (rough DHCP sketch at the bottom of this mail).

Rob

From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Jonathan Tripathy
Sent: 08 June 2010 13:56
To: Xen-users@lists.xensource.com
Subject: [Xen-users] My future plan

My future plan currently looks like this for my VPS hosting solution, so any feedback would be appreciated: [snip]
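The DHCP side of PXE is only a few lines per node in dhcpd.conf (the MAC and addresses below are obviously placeholders); the fiddlier part is chain-loading the hypervisor, which pxelinux can do with its mboot.c32 module:

    host node1 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.10.11;
        next-server 192.168.10.50;      # TFTP server holding pxelinux.0
        filename "pxelinux.0";
    }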
Hi Rob,

Do you have any links or anything for the cards that you suggest? I'm just a start-up, so low cost is very much a good thing here :) But then again, so is having my cake and eating it as well!!

The RAID card that came standard with the server I was looking at is here:
http://www.broadberry.co.uk/iscsi-san-nas-storage-servers/cyberstore-316s-wss

That's a fantastic idea about the PXE booting! The only thing, though, is that Dell supply their servers with a minimum of a single HDD as standard, so there would be no cost saving there. And also, all the servers would have to be the same. My idea is that if this works out properly, I would get servers better than the R210, as these are limited to 16GB of RAM max.

Thanks
Jonathan

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Tue 08/06/2010 15:36
To: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

It might be worth considering a different RAID card. [snip]
Hi Jonathan,

The NAS is using good components. Make sure you get the IPMI option if this is going into a rack more than 5 minutes away from where you work.

Ask Broadberry if they can supply the newer SAS 6G expander version of that chassis and the newer 9260-4i 6G RAID card (I'm pretty sure it's a Supermicro-approved card for that chassis); with 16 drives, 6G SAS may remove a potential bottleneck at the expander. Also, consider 15K SAS for your high-IO database and mail servers; a mix of 15K SAS and 7K SATA arrays might be appropriate.

Anything but LSI cards often has issues with the LSI-based expanders in those Supermicro chassis. Areca do work with the SAS1 expander as long as SAF-TE is disabled, but considering the expander I think LSI is the only advisable card brand.

Any reason you aren't considering 1U servers with integrated Intel NICs for the nodes? Often the best bang per buck for nodes is a 1U dual Xeon E55XX quad-core or one of the new Opteron octal/dodeca-core systems.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 08 June 2010 15:38
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Rob,

Do you have any links or anything for the cards that you suggest? [snip]
Hi Rob,

Since this is just an idea at this stage, and we are just starting out, we want to build up our rack over time. The Dell R210 is the best we can afford at the minute. Maybe, after the first 4 or 5 R210s, I could look into getting servers with dual CPUs in them so more guests can run. Initially, each server will be handling its own storage using RAID1.

The Dell R210s do come with dual on-board NICs, however I need one of them for the internet connection, unless of course I used VLANs and just used the on-board NICs (rough sketch of what I mean at the bottom of this mail)?

I'm very confused about the RAID cards. I've never really worked with these before, so all advice is appreciated.

With the total number of VMs at around 100, do you think I'll notice much of a difference between SATA and SAS?

Thanks

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Tue 08/06/2010 15:56
To: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

The NAS is using good components. Make sure you get the IPMI option [snip]
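If I did go the VLAN route, I guess each node would just get a tagged sub-interface for the storage network, something like the sketch below (made-up VLAN ID and address), with the matching tagged ports configured on the switch:

/etc/sysconfig/network-scripts/ifcfg-eth0.10:
    DEVICE=eth0.10
    VLAN=yes
    IPADDR=192.168.10.11
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes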
Hi Jonathan,

I was going to say buy duals, but then I saw the price of the R210s - tough call.

Good example: http://www.supermicro.com/products/system/1U/6016/SYS-6016T-URF4_.cfm?UIO=N

18 DIMM slots and 4 of the latest Intel GbE ports (supports multi-queue, used in Xen 4.0). Just add some quad or hex core Xeons and as much RAM as you need; no need for additional NICs. Depending on internal policy, on-site rapid-response support may be less of an issue when you have a redundant node-type architecture.

I have to admit the R210s are a good price though, and it's a tough choice:

R210 with 2.40GHz QC, 8GB RAM and Pro/1000 ET dual port - about £600
Supermicro above with dual 2.26GHz QC and 24GB RAM - about £1700

The dual option gives you full-screen iKVM and redundant PSUs, along with 12 spare memory slots (6 used) as opposed to no spare slots on the R210. A lot of this depends on your RAM requirements and spare rack space, I suppose. Would be interesting to hear the opinions of others.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 08 June 2010 16:20
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Rob,

Since this is just an idea at this stage, and we are just starting out, we want to build up our rack over time. [snip]
Hi Rob,

Thanks for the link. I think very highly of Supermicro gear, as well as their staff. However, since we wish to build up the solution slowly, we can really only afford to start with the R210s. Once the initial 3 or 4 R210s generate some revenue, we could look into some beefier servers (which would be much cheaper in the long run, as we could run more guests per node). Please let me know if you think my plan is flawed from the outset.

When you were spec'ing the R210, what NIC were you looking at? Just the 2 on-board ones?

Thanks
Jonathan

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Wed 09/06/2010 09:13
To: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

I was going to say buy duals, but then I saw the price of the R210s - tough call. [snip]
Thanks Rob for the tip on the NICs! This will come in handy. My main area of concern was using Ethernet/software iSCSI for my setup, but all seems OK! I'll remember to ask Broadberry about the new backplane and RAID card for the storage server.

Do you think I'll be alright using just SATA disks for my setup? I guess I could always change the disks if it became a problem...

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Wed 09/06/2010 09:36
To: Jonathan Tripathy
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

There is nothing wrong with your plan, just make sure you get the SAS 6G backplane and card for the storage; the cost difference should be little or nothing, and you don't want to be bandwidth-constrained later by the RAID card if you choose to upgrade to 10Gbit for storage.

I could not see the dual-port Pro/1000 ET copper card among Dell's options, so I was just pricing those separately at about £100 each:
http://www.google.co.uk/products/catalog?q=E1G42ET+Intel&cid=12126864948002960902&ei=b1EPTPKmFZ622ASGxYnUBA&sa=title&ved=0CAcQ8wIwADgA#p

The ET ones are the latest with multi-queue support, so they are the ones to get IMHO.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 09 June 2010 09:20
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Rob,

Thanks for the link. I think very highly of Supermicro gear, as well as their staff. [snip]
Hi Jonathan,

One other thing: check with the iSCSI guys as to whether using 2 dual-port Intel ET cards might be better than a single quad-port card. Intel quad-port cards use a PCI-E bridge to join 2 dual-port chips, so it might be faster, and is definitely cheaper, to use two dual-port cards.

In my experience storage speed is very much a try-it-and-see type of thing for smaller setups. After 2 years using Xen we run a mix of 15K SAS and consumer SATA. Generally, small fast SAS drives are best for DB and public/POP mail servers. Web, Exchange/IMAP and support servers (e.g. RADIUS and DNS) typically favour space over speed and hence better suit large SATA drives.

I would really consider a second storage server when you can; that is a lot of eggs in one basket, although I know these storage baskets are quite pricey. Join them by 20Gbit direct-connect InfiniBand and run DRBD with SDP; it makes for very fast replication, and you can still balance the reads using iSCSI active/passive multipath between the two. I was considering this a week ago; we currently run sets of quad-socket servers joined by InfiniBand for DRBD replication.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 09 June 2010 09:39
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Thanks Rob for the tip on the NICs! This will come in handy. My main area of concern was using Ethernet/software iSCSI for my setup, but all seems OK!

I'll remember to ask Broadberry about the new backplane and RAID card for the storage server. Do you think I'll be all right using just SATA disks for my setup? I guess I could always change the disks if it became a problem...

________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Wed 09/06/2010 09:36
To: Jonathan Tripathy
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

There is nothing wrong with your plan; just make sure you get the SAS 6G backplane and card for the storage. The cost difference should be little or nothing, and you don't want to be bandwidth-constrained later by the RAID card if you choose to upgrade to 10Gbit for storage.

I could not see the dual-port Pro 1000 ET copper card among Dell's options, so I was just pricing those separately at about £100 each: http://www.google.co.uk/products/catalog?q=E1G42ET+Intel&cid=12126864948002960902&ei=b1EPTPKmFZ622ASGxYnUBA&sa=title&ved=0CAcQ8wIwADgA#p

The ET ones are the latest, with multi-queue support, so they are the ones to get IMHO.

Rob

From: Jonathan Tripathy [mailto:jonnyt@abpni.co.uk]
Sent: 09 June 2010 09:20
To: Robert Dunkley; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Rob,

Thanks for the link. I think very highly of Supermicro gear as well as their staff. However, since we wish to build up the solution slowly, we can really only afford to start with the R210s. Once the initial 3 or 4 R210s generate some revenue, then we could look into some beefier servers (it would be much cheaper in the long run, as we could run more guests per node).

Please let me know if you think my plan is flawed from the outset. When you were spec'ing the R210, what NIC were you looking at? Just the 2 on-board ones?

Thanks
Jonathan
________________________________
From: Robert Dunkley [mailto:Robert@saq.co.uk]
Sent: Wed 09/06/2010 09:13
To: Jonathan Tripathy; Xen-users@lists.xensource.com
Subject: RE: [Xen-users] My future plan

Hi Jonathan,

I was going to say buy duals, but then I saw the price of the R210s. Tough call. Good example: http://www.supermicro.com/products/system/1U/6016/SYS-6016T-URF4_.cfm?UIO=N

18 DIMM slots and 4 of the latest Intel GbE ports (supports the multi-queue used in Xen 4.0). Just add some quad- or hex-core Xeons and as much RAM as you need; no need for additional NICs. Depending on internal policy, on-site rapid-response support may be less of an issue when you have a redundant-node type of architecture.

I have to admit the R210s are a good price though, and it's a tough choice:

R210 with 2.40GHz QC, 8GB RAM and Pro 1000 ET dual port - about £600
Supermicro above with dual 2.26GHz QC and 24GB RAM - about £1700

The dual option gives you full-screen iKVM and redundant PSUs, along with 12 spare memory slots (6 used) as opposed to no spare slots on the R210. A lot of this depends on your RAM requirements and spare rack space, I suppose. Would be interesting to hear the opinions of others.

Rob
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
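As an illustration of the two-storage-server idea Rob mentions (DRBD replication between a pair of boxes, with the nodes still connecting over iSCSI), a DRBD 8.3-style resource could be sketched roughly as below; the hostnames, devices and addresses are assumptions, not a tested configuration.

  # /etc/drbd.conf fragment - one resource per exported LUN (placeholder names)
  resource lun0 {
    protocol C;                      # synchronous replication
    syncer { rate 100M; }
    on store1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;           # slice of the local RAID10 array
      address   192.168.30.1:7788;   # dedicated replication link
      meta-disk internal;
    }
    on store2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.30.2:7788;
      meta-disk internal;
    }
  }

With the InfiniBand/SDP option Rob describes, the address lines would use the sdp address family instead of plain IPv4, assuming the DRBD build supports it.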
Hi Rob,

I completely understand what you are saying. My future future plan (yes, the future of the future!) is to get more storage servers and do replication. I would also like to think about HA for the nodes, as in my current plan I would have to manually bring up a new node and connect it to the dead node's iSCSI target.

Jonathan
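As a sketch of what that manual reattachment might involve with open-iscsi, the steps below give the general shape; the portal address and IQN are invented placeholders.

  # on the replacement node: discover the storage server's targets
  iscsiadm -m discovery -t sendtargets -p 192.168.20.1

  # log in to the target the failed node was using
  iscsiadm -m node -T iqn.2010-06.uk.co.example:store1.node3 -p 192.168.20.1 --login

  # optionally make the login persistent across reboots
  iscsiadm -m node -T iqn.2010-06.uk.co.example:store1.node3 -p 192.168.20.1 \
      --op update -n node.startup -v automatic

The LUN then shows up as a new block device on the replacement node and the guests can be recreated from their configs; automating that hand-over is the sort of job Heartbeat/Pacemaker are normally used for.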
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
On the DRBD mailing lists I've seen a couple of times that they did tests with bonding, and they claim that a bond with more than 2 NICs will actually decrease performance because of the TCP reordering that needs to be done.

That's the reason why I limit the storage connection to two NICs. I have a very similar setup to yours in the making, by the way.

On Tuesday 08 June 2010 15:55:47 Jonathan Tripathy wrote:
> Hi Michael,
>
> Thanks for the tips on using SSDs for the node OS drives.
>
> Regarding the NIC, I was thinking about using this for the nodes:
>
> http://www.intel.com/products/server/adapters/pro1000pt-dualport/pro1000pt-dualport-overview.htm
>
> and this for the server:
>
> http://www.intel.com/products/server/adapters/pro1000pt-quadport-low-profile/pro1000pt-quadport-low-profile-overview.htm
>
> Are those the cards you were talking about? They are very cheap on eBay, you see...
>
> Think 4-port bonding for the server is good enough for 8 nodes?
>
> Thanks
>
> ________________________________
>
> From: Michael Schmidt [mailto:michael.schmidt@xncore.com]
> Sent: Tue 08/06/2010 14:49
> To: Jonathan Tripathy; Xen-users@lists.xensource.com
> Subject: Re: [Xen-users] My future plan
>
> Hi Jonathan,
>
> you should think about flash or SD cards as the Xen boot drive.
> This gives you lower cost and higher energy efficiency.
> If you mount /tmp and /var/log on tmpfs, these disks work very well and last a long time.
>
> If you don't need so much disk space for your storage, use SAS disks.
> SAS (10k/15k) disks provide many more IOPS than SATA disks (more IOPS per $/EUR as well).
> And very important: a very large cache for your RAID controller.
>
> Intel e1000e is a pretty good choice. These cards have a large buffer and
> generate just a few interrupts on your CPUs (in comparison to the Broadcom NICs).
>
> Best Regards
>
> Michael Schmidt

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
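For reference, a two-NIC balance-rr bond in CentOS 5-style init scripts might be set up roughly as follows; interface names and addresses are placeholders, and the matching ProCurve ports would need to be configured as a trunk (or as an LACP trunk if mode 802.3ad is used instead).

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=balance-rr miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  ONBOOT=yes
  BOOTPROTO=static
  IPADDR=192.168.20.1
  NETMASK=255.255.255.0

  # /etc/sysconfig/network-scripts/ifcfg-eth2  (and likewise for eth3)
  DEVICE=eth2
  ONBOOT=yes
  BOOTPROTO=none
  MASTER=bond0
  SLAVE=yes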
So should I just go with 2 NICs for the storage server then?

In your future setup, how many NICs are you using for the storage server and how many for the nodes? I take it you're using software iSCSI?
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
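Going back to Michael's earlier suggestion of booting dom0 from flash or SD and keeping the write-heavy paths off the card, a minimal /etc/fstab sketch could look like this (the sizes are arbitrary placeholders):

  # /etc/fstab additions on a flash/SD-booted dom0
  tmpfs   /tmp       tmpfs   defaults,size=256m   0 0
  tmpfs   /var/log   tmpfs   defaults,size=128m   0 0

The obvious trade-off is that logs vanish on reboot unless they are also shipped to a remote syslog host.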
Hi Jonathan,

I use a DRBD-based IET install. It syncs between the nodes with two bonded Intel e1000 NICs. I use the same network cards to connect to the Xen hypervisors.

MIND YOU: I use dual-port NICs (two in total on the storage servers), but I CROSS the connections: that is, I connect one port of one card to the Xen nodes, but I use the other for the DRBD sync, and the other way around of course. This way, if a card breaks, I still have things running. To be able to use two switches in between the Xen hosts and the storage, I use multipathing to connect to the iSCSI LUNs. This results in higher speed and network redundancy.

It would make no sense to use more than 2 ports, since DRBD cannot sync any faster, but also, as mentioned before, it seems that bonding more than 2 does not result in higher speeds. This, however, is easily tested with netperf. I would be happy to hear someone's test results on this.

Oh yes, if you don't get the expected speeds with bonded cards in mode 0, try looking at tcp_reordering in /proc/sys/net/ipv4 or something like that...
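Spelling out those two suggestions, the knob and the test might be exercised along these lines; the value 127 is just one figure sometimes suggested for balance-rr bonds, and the address is a placeholder.

  # allow more out-of-order segments before TCP treats them as loss (default is 3)
  sysctl -w net.ipv4.tcp_reordering=127

  # throughput test from a node to the storage server
  # (start the "netserver" daemon on the storage box first)
  netperf -H 192.168.20.1 -t TCP_STREAM -l 30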
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
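On the multipathing mentioned above, an active/passive dm-multipath fragment for LUNs exported by IET might be shaped roughly like this; the WWID is a placeholder and the vendor/product strings are assumptions about what the target reports.

  # /etc/multipath.conf fragment (illustrative only)
  defaults {
      user_friendly_names yes
  }
  devices {
      device {
          vendor                "IET"
          product               "VIRTUAL-DISK"
          path_grouping_policy  failover      # one active path, one standby
          path_checker          tur
          no_path_retry         12
      }
  }
  multipaths {
      multipath {
          wwid   360000000000000000e00000000010001   # placeholder WWID from the LUN
          alias  vmstore
      }
  }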
----- Original message -----
> Hi Jonathan,
>
> I use a DRBD-based IET install. It syncs between the nodes with two bonded
> Intel e1000 NICs. I use the same network cards to connect to the Xen
> hypervisors.

Correction: the same kind of cards.
_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users