I have a Dell PowerEdge 2900 with X5355 processors.

I want to test the clustering features of XCP, but I don't have another server handy. I am considering building a small machine in a desktop form factor that would be compatible, but that I could then re-use as a workstation later.

I noticed this server board on the Citrix HCL for XenServer 5.6 FP2: Intel Server Board S1200BTS. It will accept Xeon processors, but not the X5355 (not that those are even available anymore). I also didn't see the X5355 on the heterogeneous CPU pool cross-reference at all.

Does anyone have experience with creating the right CPU masks to make two different CPUs work together, and is there a model you recommend versus models you don't?

My goal is an inexpensive test, so we can continue our transition from VMware ESX to XCP. If necessary, we might just build two small machines and use those for the test. Can anyone recommend some inexpensive workstation-class hardware that will work for testing a pooled configuration?

Thanks,

Brett Westover
> Does anyone have any experience with creating the right CPU masks to
> make two different CPUs work, and is there a model you recommend?
> [...]
> Can anyone recommend some inexpensive workstation-class hardware that
> will work for testing a pooled configuration?

Does anyone have any recommendations for test hardware?

We are looking to buy something so that we can build some knowledge and confidence in XCP as a suitable and more flexible replacement for ESX. Ideally we'd just buy or build one machine that could be made compatible with the one test server we have, but if it were easier and not terribly expensive we'd just buy two identical machines.

Am I overcomplicating this?

Thanks, Brett
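On the CPU-mask question above, a minimal sketch of how feature masking is usually approached on XCP/XenServer hosts from the xe CLI, assuming a release that supports host-level masking (it appeared around XenServer 5.6 FP1); <MASK> is a placeholder, not a real mask for the X5355:

    # Inspect the CPU flags each host advertises (run on every host).
    # The "features" and "physical_features" fields are four 32-bit hex words.
    xe host-cpu-info

    # Apply a reduced feature set so a host can join a pool of older CPUs.
    # <MASK> is a placeholder: the intersection of the feature words of all
    # hosts you intend to pool. A reboot is needed before it takes effect.
    xe host-set-cpu-features features=<MASK>

    # To return the host to its native feature set later:
    xe host-reset-cpu-features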
On Wed, Oct 26, 2011 at 9:57 AM, Brett Westover <bwestover@pletter.com> wrote:

> Does anyone have any recommendations for test hardware?
>
> We are looking to buy something so that we can build some knowledge and
> confidence in XCP as a suitable and more flexible replacement for ESX.
>
> Am I overcomplicating this?
>
> Thanks, Brett

I'm sure there will be various opinions on this, but after going down the "really expensive but rarely upgradable" path I just build commodity servers for XCP now. Each host has a 2U case, a hexa-core CPU, 16 GB of RAM, a local drive that's currently used for nothing much, and two network interfaces. With each I buy a 240 GB SSD that goes in the SAN. So for $1000 I can drop one machine into the rack, plug in power and both network cables, slide the SSD into the SAN box, and I've just expanded capacity by 30 VMs. Each VM gets 512 MB of RAM and 7 GB of storage space. Any other storage can be pulled from larger disk-based SAN shares via iSCSI or NFS. This allows me to expand very quickly and at minimal cost.

For testing purposes you could do without the SSD drives. I'm looking at building a 128-core cloud to teach cloud computing using the same types of replaceable hosts. The cores will be divided up into smaller 16-core clouds, each given to a team of 4 students.

I think it would all depend on what you plan on doing with your cloud.

Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
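For anyone wanting to reproduce the shared-storage part of this setup, a minimal sketch of attaching a shared NFS SR to a pool with the xe CLI; the server name, export path and label below are placeholders, and an iSCSI-backed SR would use type=lvmoiscsi with target/IQN parameters instead:

    # Create a shared NFS storage repository (run against the pool master).
    # "nas01" and "/exports/xcp-sr" are placeholders for your own NFS export.
    xe sr-create type=nfs shared=true content-type=user \
        name-label="shared-nfs-sr" \
        device-config:server=nas01 \
        device-config:serverpath=/exports/xcp-sr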
> I'm sure there will be various opinions on this, but after going down
> the "really expensive but rarely upgradable" path I just build commodity
> servers for XCP now. Each host has a 2U case, a hexa-core CPU, 16 GB of
> RAM, a local drive that's currently used for nothing much, and two
> network interfaces. With each I buy a 240 GB SSD that goes in the SAN.
>
> For testing purposes you could do without the SSD drives.

I agree with that philosophy, but how do you ensure compatibility, both with the XCP software and with each other?

I started with the XenServer HCL that Citrix provides, and it led me to believe it was fairly strict on what would work. (Or at least what they were willing to support, which I know is different.)

When you say "build commodity servers", are you referring to something "standard" from Dell, HP or IBM, or do you mean "build" as in buy a case, a board/CPU/RAM, hard disks, etc.?

What you describe sounds like a dream compared to where we are currently. To expand capacity by that same amount we'd be looking at 10x the cost for hardware and software.

Brett Westover
On Wed, Oct 26, 2011 at 1:51 PM, Brett Westover <bwestover@pletter.com> wrote:

> I agree with that philosophy, but how do you ensure compatibility, both
> with the XCP software and with each other?
>
> I started with the XenServer HCL that Citrix provides, and it led me to
> believe it was fairly strict on what would work. (Or at least what they
> were willing to support, which I know is different.)

It's just CentOS, so it works with most pieces of hardware that RHEL5/CentOS5 works with. Back in the .5 beta days I found things that XCP wouldn't work on that CentOS did, but with 1.1 beta and newer it seems to be more equal. So basically grab a motherboard off the shelf and it will probably work fine. Having said that, pick your hardware wisely just like you would any other: just because XCP runs on it doesn't mean the design is stable enough for your project. I feel that I have a bit more flexibility than I did with stock Xen because it's so easy to set up a system that can migrate with XCP. This allows me to be a bit more aggressive with my hardware.

> When you say "build commodity servers", are you referring to something
> "standard" from Dell, HP or IBM, or do you mean "build" as in buy a
> case, a board/CPU/RAM, hard disks, etc.?

I started out with purpose-built rackmount systems that ran about $500 per core. Not super expensive big-iron stuff, but definitely server hardware. Now I buy off-the-shelf rackmount cases (with no drive bays), motherboards, CPUs and RAM. The only thing it needs on the motherboard is lots of cores, lots of RAM and a network interface or two. That, and the design has to be sound enough not to be flaky. So far this is working.

> What you describe sounds like a dream compared to where we are
> currently. To expand capacity by that same amount we'd be looking at
> 10x the cost for hardware and software.

I got tired of ending up with hardware that costs too much to upgrade (damn you Intel), so now I buy hardware that's cheap enough to replace. The main concern, however, is that you need to think long and hard about your CPU choice because pools like to have the same CPU in them. I prefer more, less-powerful cores over fewer, more-powerful cores, so I'm building a lot of AMD hexa-core systems. The new cloud will probably be AMD 8-core boards.
Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
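As a rough illustration of the pooling step being discussed (hostname and credentials are placeholders), joining a second host to an existing pool is a single xe call, and this is the point where mismatched CPU feature sets get rejected:

    # Run on the host that should join the existing pool.
    # pool-master.example.com / root / secret are placeholders.
    xe pool-join master-address=pool-master.example.com \
        master-username=root master-password=secret

    # If the join is refused because the CPUs differ, compare
    # `xe host-cpu-info` on both hosts and mask the newer CPU down to the
    # common feature set (see the masking sketch earlier in the thread).
    # A force=true flag exists, but a homogeneous pool is much safer.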
> I got tired of ending up with hardware that costs too much to upgrade
> (damn you Intel), so now I buy hardware that's cheap enough to replace.
> The main concern, however, is that you need to think long and hard about
> your CPU choice because pools like to have the same CPU in them.

Well, I really like that idea, and it serves two purposes for us. It would be a shift to a more efficient cost model, and it's much more scalable. Right now we have to fit the purchase to our requirements, and then make entirely separate plans when it comes time to upgrade. This model scales from my "cheap and easy" test requirements all the way up to a large production cloud.

One question: when you DO need to upgrade to the next generation of processor, does that just become a separate pool? So you're kind of stuck with the CPU type you've selected for that whole pool forever, but come time to build a new cloud, you can make a different choice if your requirements have changed or the market has moved on, whichever comes first. You have to build the redundancy you require into the new cloud, and then do cold migrations of your workload to the new cloud. Does that sound right?

Another question: how specific are you willing to get on hardware? I am literally looking to build a parts list in the next few days, and I would love to swap notes and get your opinion. (Anyone's opinion in fact, though it seems I'm mostly talking to Grant here.)

> "... a local drive that's currently used for nothing much..."

Last question: I'm guessing you slap a single local disk into each server just to boot the OS. What are your thoughts on using a flash disk instead? If the OS is not particularly write-heavy, it would seem that this would save on cooling and power. We currently use Debian-based "routers" which are really just commodity servers with 2 NICs and flash disks to boot the OS image. Would XCP run well that way, or is it more dependent on its local disk?

Thank you for all your input.

Brett Westover
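On the cold-migration point, a minimal sketch of how moving a VM between pools is typically done with the xe CLI; the UUIDs and the transfer path are placeholders, and the .xva file just needs to be reachable from both pools:

    # On the old pool: shut the VM down and export it to a portable image.
    xe vm-shutdown uuid=<VM-UUID>
    xe vm-export vm=<VM-UUID> filename=/mnt/transfer/myvm.xva

    # On the new pool: import the image and start it there.
    xe vm-import filename=/mnt/transfer/myvm.xva
    xe vm-start uuid=<NEW-VM-UUID>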
On Thu, Oct 27, 2011 at 10:07 AM, Brett Westover <bwestover@pletter.com> wrote:

> One question: when you DO need to upgrade to the next generation of
> processor, does that just become a separate pool? You have to build the
> redundancy you require into the new cloud, and then do cold migrations
> of your workload to the new cloud. Does that sound right?

Pretty much, although there's a certain amount of wiggle room. Unless you absolutely need all your VMs in the same pool, I'd just add the new hardware and make a new pool out of it. I had to do that when I went from quad-core Xeons to hexa-core AMDs. The nice thing about commodity hardware is that replacing equipment is fairly painless on the wallet.

> Another question: how specific are you willing to get on hardware? I am
> literally looking to build a parts list in the next few days, and I
> would love to swap notes and get your opinion.

I'll give you the entire list if you'd like. My company also creates clouds for other companies of reasonable size. I have a 500-node cloud in design and a project twice that size after that. So if you folks get yourselves into a bind I can bail you out on your dime. :-)

> Last question: I'm guessing you slap a single local disk into each
> server just to boot the OS. What are your thoughts on using a flash
> disk instead? Would XCP run well that way, or is it more dependent on
> its local disk?

The OS doesn't do a whole lot, although with local SR caching this may change. My local disks just sit there, but to get the cost of each node down further I'll be investigating PXE-booting the nodes. However, I mentioned local SR caching, which could be the fly in the ointment.

Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
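For the local SR caching mentioned above, assuming the XCP build in use exposes the same host-level caching switch that XenServer 5.6 FP1 introduced (the command name and the need to disable the host first are based on that release, and exact parameter names may vary; UUIDs are placeholders), enabling it looks roughly like this:

    # Find the UUID of the local SR that should hold the cache.
    xe sr-list name-label="Local storage"

    # The host has to be disabled while the caching setting is changed.
    xe host-disable uuid=<HOST-UUID>
    xe host-enable-local-storage-caching sr-uuid=<LOCAL-SR-UUID>
    xe host-enable uuid=<HOST-UUID>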