Hello Everyone,

I was wondering what people are running these days, and how it compares to the $10,000 SAN boxes. We are looking to build a fiber SAN using IET and GlusterFS, and were wondering what kind of luck people have had with this approach, or any other, for that matter.

Kind Regards,

Nick.
On Tue, 11 Jun 2013 10:27:40 -0400, Nick Khamis <symack@gmail.com> wrote:
> Hello Everyone,
>
> I was wondering what people are running these days, and how it compares
> to the $10,000 SAN boxes. We are looking to build a fiber SAN using IET
> and GlusterFS, and were wondering what kind of luck people have had with
> this approach, or any other, for that matter.

A standalone SAN? What is your use case for GlusterFS?

Gordan
On Tue, 11 Jun 2013 09:27:40 -0500, Nick Khamis <symack@gmail.com> wrote:
> the $10,000 SAN boxes

Where are you getting SAN quotes for a mere $10,000? I hope you missed a zero...
There isn't one, really. Would ext3/4 suffice? What would be a good middle ground between performance and stability? GlusterFS could be used to replicate the drives, and we would use Corosync with Pacemaker for failover. DRBD could have been used for replication; however, last I checked there was a 4TB size limit.

Kind Regards,

Nick.
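A minimal sketch of what the Pacemaker side of such a two-node failover pair can look like, in crm shell syntax. This assumes a single DRBD resource named r0 (or whatever replication layer ends up underneath), an LVM volume group vg0 on top of it and a floating service IP; all names and addresses below are made-up placeholders, not an actual configuration:

    # Promote DRBD on one node, then start the volume group and a
    # floating service IP on whichever node is currently Master.
    primitive p_drbd_r0 ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    ms ms_drbd_r0 p_drbd_r0 \
        meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
    primitive p_lvm_vg0 ocf:heartbeat:LVM params volgrpname=vg0
    primitive p_ip ocf:heartbeat:IPaddr2 params ip=10.0.0.10 cidr_netmask=24
    group g_storage p_lvm_vg0 p_ip
    colocation col_storage_on_drbd inf: g_storage ms_drbd_r0:Master
    order ord_storage inf: ms_drbd_r0:promote g_storage:start

The same pattern extends to an iSCSI or FC target resource grouped after the service IP.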
Hey Nick - just curious and not trying to split hairs - but what $10,000 SAN?

I've seen and built what I'd call a "NAS" in that price range, but in my mind (maybe not in the formal definition though), a SAN is more - like a management GUI, the ability to manage snapshots, backups, etc., often managed across multiple chassis - it's more a multi-device network, isn't it? I guess the line is blurring though...

We use some Linux boxes running DRBD - lots of people seem to be going that route, anecdotally speaking. But lots of tiny points seem to affect performance.

-----Original Message-----
From: xen-users-bounces@lists.xen.org [mailto:xen-users-bounces@lists.xen.org] On Behalf Of Gordan Bobic
Sent: June 11, 2013 7:38 AM
To: Nick Khamis
Cc: xen-users
Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN

A standalone SAN? What is your use case for GlusterFS?

Gordan
On Tue, 11 Jun 2013 10:48:08 -0400, Nick Khamis <symack@gmail.com> wrote:
> There isn't one, really. Would ext3/4 suffice? What would be a good
> middle ground between performance and stability? GlusterFS could be
> used to replicate the drives, and we would use Corosync with Pacemaker
> for failover.
>
> DRBD could have been used for replication; however, last I checked
> there was a 4TB size limit.

You don't have to replicate the whole pool in one DRBD device. Set up a mirrored pair of disks over DRBD, one DRBD device per disk. I'd probably put something like ZFS on top to glue the DRBD devices together and export zvols over iSCSI.

I'm using a setup similar to that, only I use daily zfs send/receive (it's incremental) to the mirror SAN, because the mirror SAN is at a different physical location and the bandwidth usage would otherwise be prohibitive.

Gordan
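To illustrate the per-disk approach, here is a minimal sketch of one such DRBD resource plus the ZFS layer on top. The hostnames san-a/san-b, the /dev/sdb backing disk, the pool name and the zvol size are placeholders, not anyone's actual configuration:

    # /etc/drbd.d/disk0.res -- one DRBD device mirroring one physical disk
    resource disk0 {
        protocol C;                 # synchronous replication
        on san-a {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.1:7788;
            meta-disk internal;
        }
        on san-b {
            device    /dev/drbd0;
            disk      /dev/sdb;
            address   10.0.0.2:7788;
            meta-disk internal;
        }
    }

    # On the current primary: glue the DRBD devices together with ZFS
    # and carve out a zvol to export over iSCSI.
    zpool create tank /dev/drbd0 /dev/drbd1 /dev/drbd2
    zfs create -V 500G tank/vm01    # block device appears under /dev/zvol/tank/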
On Tue, 11 Jun 2013 09:43:31 -0500, "Mark Felder" <feld@feld.me> wrote:
> On Tue, 11 Jun 2013 09:27:40 -0500, Nick Khamis <symack@gmail.com> wrote:
>
>> the $10,000 SAN boxes
>
> Where are you getting SAN quotes for a mere $10,000? I hope you missed
> a zero...

Depends on the SAN. I just saw an advert on my gmail account advertising a 67TB SAN for $20K.

Gordan
Hello Mark,

Thank you so much for your response. Our pricing for the HP P2000 G3 FC, last I checked, was $12,500. That would be more than enough; however, we are just considering our options - what has worked in the past, what is proven, etc...

N.
On Tue, 11 Jun 2013 14:49:24 +0000, "mitch@bitblock.net" <mitch@bitblock.net> wrote:
> Hey Nick - just curious and not trying to split hairs - but what
> $10,000 SAN?
>
> I've seen and built what I'd call a "NAS" in that price range, but in
> my mind (maybe not in the formal definition though), a SAN is more -
> like a management GUI, the ability to manage snapshots, backups, etc.,
> often managed across multiple chassis - it's more a multi-device
> network, isn't it?

No. A NAS works at the file-system level (e.g. NFS, CIFS, GlusterFS). A SAN works at the block-device level (iSCSI, AoE, at a push DRBD, NBD). Think network-attached disk as opposed to network-attached file system. None of the features you mentioned are specific to a NAS vs. a SAN - you can get either with them (or without them).

> I guess the line is blurring though... We use some Linux boxes running
> DRBD - lots of people seem to be going that route, anecdotally
> speaking. But lots of tiny points seem to affect performance.

No more so than on any storage system, DAS included. Most admins, including experienced and competent ones, have never thought about the implications of alignment of structures throughout the storage stack. The profile of the issue has only been raised slightly recently with the introduction of disks with 4KB sectors, but even that only covers one particular layer of the stack, whereas similar issues apply throughout all layers (e.g. RAID below the file system, the application above, and sometimes other factors as well). Have a read here to get the basic gist of it:

http://www.altechnative.net/2010/12/31/disk-and-file-system-optimisation/

In different setups (e.g. ZFS) some of this applies differently - the only way to get it right is to actually understand what is going on in every layer, i.e. you have to be a "full stack engineer".

Gordan
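As a concrete example of the alignment arithmetic involved - purely for illustration, assuming a software RAID device /dev/md0 with a 64KiB chunk size and four data disks, formatted with 4KiB ext4 blocks:

    # Start the partition at 1MiB so it falls on a chunk/stripe boundary
    parted -s -a optimal /dev/md0 mklabel gpt mkpart primary 1MiB 100%

    # stride       = chunk size / fs block size    = 64KiB / 4KiB = 16
    # stripe-width = stride * number of data disks = 16 * 4       = 64
    mkfs.ext4 -b 4096 -E stride=16,stripe-width=64 /dev/md0p1

Get those numbers wrong (or leave the partition at the old 63-sector offset) and a single logical write can straddle two physical stripes, which is exactly the kind of tiny point that quietly hurts performance.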
Hello Everyone,

I am speaking for everyone when saying that we are really interested in knowing what people are using in deployment. This would be active/active replicated, block-level storage solutions at the:

NAS Level: FreeNAS, OpenFiler (I know it's not Linux), IET
FS Level: ZFS, OCFS2, GFS2, GlusterFS
Replication Level: DRBD vs. GlusterFS
Cluster Level: OpenAIS with Pacemaker, etc...

Our hope is for an educated breakdown (i.e. comparisons, benefits, limitations) of different setups, as opposed to a war of words over which NAS solution is better than another. Comparing black boxes would also be interesting at a performance level. Talk about pricing, not so much, since we already know that they cost an arm and a leg.

Kind Regards,

Nick.
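For reference, the IET entry on that list is about as small as a target configuration gets; a sketch of an /etc/iet/ietd.conf exporting a single block device (the IQN and backing device below are placeholders):

    # /etc/iet/ietd.conf -- export one block device over iSCSI with IET
    Target iqn.2013-06.net.example:san.lun0
        Lun 0 Path=/dev/vg0/lun0,Type=blockio
        MaxConnections 1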
On Tue, Jun 11, 2013 at 11:30 AM, Nick Khamis <symack@gmail.com> wrote:
> I am speaking for everyone when saying that we are really interested in
> knowing what people are using in deployment. This would be active/active
> replicated, block-level storage solutions at the:
>
> NAS Level: FreeNAS, OpenFiler (I know it's not Linux), IET
> FS Level: ZFS, OCFS2, GFS2, GlusterFS
> Replication Level: DRBD vs. GlusterFS
> Cluster Level: OpenAIS with Pacemaker, etc...
>
> Our hope is for an educated breakdown (i.e. comparisons, benefits,
> limitations) of different setups, as opposed to a war of words over
> which NAS solution is better than another. Comparing black boxes would
> also be interesting at a performance level. Talk about pricing, not so
> much, since we already know that they cost an arm and a leg.

There was actually one more level I left out:

Hardware Level: PCIe bus (8x, 16x, v2, etc.), interface cards (FC and RJ45), SAS (Seagate vs. WD)

I hope this thread takes off, and individuals interested in the same topic can get some really valuable info.

On a side note, an interesting comment I received was on the risks associated with such a custom build, as well as the lack of flexibility in some sense. We would not build a whitebox for this setup, and would advise against it as well. Our approach will be to purchase an IBM, SuperMicro or whatever with sufficient bays, processing power and PCIe bus.

It would be good to discuss what has not worked in the past, and how some of the replication-level technologies flopped in some sense. For example, how FreeNAS has limited support for clustering, or how highly available OpenFiler instances scale with very large storage instances, etc...

There is also SCST, which I've heard about before but have not dug into very much. Anyone know how this can fit into a SAN?

N.
On Tue, 06/11/2013 10:27 AM, Nick Khamis <symack@gmail.com> wrote:
> I was wondering what people are running these days, and how it compares
> to the $10,000 SAN boxes. We are looking to build a fiber SAN using IET
> and GlusterFS, and were wondering what kind of luck people have had with
> this approach, or any other, for that matter.

I've built a number of white-box SANs using everything from OpenSolaris and COMSTAR, Open-E, OpenFiler, SCST, IET... etc., using iSCSI and FC. I've settled on Ubuntu boxes booted via DRBD, running SCST or ESOS.

From a performance perspective, I have a pretty large customer that has two XCP pools running off a Dell MD3200F using 4Gb FC. To compare, I took a Dell 2970 or something like that, stuck 8 Seagate 2.5" Constellation drives in it and a 4Gb HBA, and installed ESOS on it. I never got around to finishing my testing, but the ESOS box can definitely keep up, and things like LSI CacheCade would really help to bring it to more enterprise-level performance with respect to random reads and writes.

Lastly, there is such an abundance of DIRT CHEAP, lightly used 4Gb FC equipment on the market today that I find it interesting that people still prefer iSCSI. iSCSI is good if you have 10GbE, which is still far too expensive per port IMO. However, you can get 2- and 4-port 4Gb FC HBAs on eBay for under 100 bucks, and I generally am able to purchase fully loaded switches (Brocade 200E) for somewhere in the neighborhood of 300 bucks each! MPIO with 2 FC ports from an initiator to a decent target can easily saturate the link on basic sequential r/w tests. Not to mention improved latency, access times, etc. for random I/O.
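On the MPIO point, the initiator side under Linux is just dm-multipath; a minimal /etc/multipath.conf that stripes I/O across both FC paths could look like the following (a generic sketch, not a tuned or vendor-specific configuration):

    # /etc/multipath.conf -- round-robin I/O across all paths to each LUN
    defaults {
        user_friendly_names yes
        path_grouping_policy multibus   # put both FC paths in one group and spread I/O
    }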
> I've settled on Ubuntu boxes booted via DRBD, running SCST or ESOS.
> From a performance perspective, I have a pretty large customer that has
> two XCP pools running off a Dell MD3200F using 4Gb FC. To compare, I
> took a Dell 2970 or something like that, stuck 8 Seagate 2.5"
> Constellation drives in it and a 4Gb HBA, and installed ESOS on it.

Correction: this was an MD3600F, not an MD3200F. Sorry. 12 drives, dual controllers.
On Tue, Jun 11, 2013 at 12:33 PM, Errol Neal <eneal@businessgrade.com> wrote:
> I've built a number of white-box SANs using everything from OpenSolaris
> and COMSTAR, Open-E, OpenFiler, SCST, IET... etc., using iSCSI and FC.
> I've settled on Ubuntu boxes booted via DRBD, running SCST or ESOS.
>
> From a performance perspective, I have a pretty large customer that has
> two XCP pools running off a Dell MD3200F using 4Gb FC. To compare, I
> took a Dell 2970 or something like that, stuck 8 Seagate 2.5"
> Constellation drives in it and a 4Gb HBA, and installed ESOS on it. I
> never got around to finishing my testing, but the ESOS box can
> definitely keep up, and things like LSI CacheCade would really help to
> bring it to more enterprise-level performance with respect to random
> reads and writes.
>
> Lastly, there is such an abundance of DIRT CHEAP, lightly used 4Gb FC
> equipment on the market today that I find it interesting that people
> still prefer iSCSI. iSCSI is good if you have 10GbE, which is still far
> too expensive per port IMO. However, you can get 2- and 4-port 4Gb FC
> HBAs on eBay for under 100 bucks, and I generally am able to purchase
> fully loaded switches (Brocade 200E) for somewhere in the neighborhood
> of 300 bucks each!
>
> MPIO with 2 FC ports from an initiator to a decent target can easily
> saturate the link on basic sequential r/w tests. Not to mention improved
> latency, access times, etc. for random I/O.

Hello Errol,

Thank you so much for your response. Did you experience any problems with ESOS and your FC SAN in terms of stability? We already have our Myrinet FC cards and switches, and I agree, it was dirt cheap.

Kind Regards,

Nick.
On Tue, 11 Jun 2013 12:29:22 -0400, Nick Khamis <symack@gmail.com> wrote:
> There was actually one more level I left out:
>
> Hardware Level: PCIe bus (8x, 16x, v2, etc.), interface cards (FC and
> RJ45), SAS (Seagate vs. WD)
>
> I hope this thread takes off, and individuals interested in the same
> topic can get some really valuable info.
>
> On a side note, an interesting comment I received was on the risks
> associated with such a custom build, as well as the lack of flexibility
> in some sense.

The risk issue I might entertain to some extent (although personally I think the risk is LOWER if you built the system yourself and you have it adequately mirrored and backed up - if something goes wrong you actually understand how it all hangs together and can fix it yourself quickly, as opposed to hours of downtime while an engineer on the other end of the phone tries to guess what is actually wrong).

But the flexibility argument is completely bogus. If you are building the solution yourself you have the flexibility to do whatever you want. When you buy an off-the-shelf all-in-one black-box appliance you are straitjacketed by whatever somebody else decided might be useful, without any specific insight into your particular use case.

Gordan
On Tue, Jun 11, 2013 at 12:52 PM, Gordan Bobic <gordan@bobich.net> wrote:
> The risk issue I might entertain to some extent (although personally I
> think the risk is LOWER if you built the system yourself and you have it
> adequately mirrored and backed up - if something goes wrong you actually
> understand how it all hangs together and can fix it yourself quickly, as
> opposed to hours of downtime while an engineer on the other end of the
> phone tries to guess what is actually wrong).

Very true!!

But apples vs. apples: it comes down to the warranty on your iSCSI RAID controller, CPU, etc. vs. whatever guts are in the PowerVault. And I agree with both trains of thought... warranty through Adaptec or Dell, in either case there will be downtime.

> But the flexibility argument is completely bogus. If you are building
> the solution yourself you have the flexibility to do whatever you want.
> When you buy an off-the-shelf all-in-one black-box appliance you are
> straitjacketed by whatever somebody else decided might be useful,
> without any specific insight into your particular use case.

For sure... The inflexibility I was referring to is instances where one starts out on an endeavour to build a replicated NAS and finds out the hard way about the size limitations of DRBD, the lack of clustering capabilities in FreeNAS, or the instability of OpenFiler with large instances.

There are also SCSI-3 persistent reservation issues - reservations are needed by some of the virtualization systems and may or may not be supported by FreeNAS (last I checked)...

N.
On Tue, 11 Jun 2013 13:03:12 -0400, Nick Khamis <symack@gmail.com> wrote:
> But apples vs. apples: it comes down to the warranty on your iSCSI RAID
> controller, CPU, etc. vs. whatever guts are in the PowerVault. And I
> agree with both trains of thought... warranty through Adaptec or Dell,
> in either case there will be downtime.

If you build it yourself you will save enough money that you can have 5 of everything sitting on the shelf for spares. And it'll all still be covered by a warranty.

> For sure... The inflexibility I was referring to is instances where one
> starts out on an endeavour to build a replicated NAS and finds out the
> hard way about the size limitations of DRBD, the lack of clustering
> capabilities in FreeNAS, or the instability of OpenFiler with large
> instances.

Heavens forbid we should do some research, prototyping and testing before building the whole solution...

It ultimately comes down to what your time is worth and how much you are saving. If you are looking to deploy 10 storage boxes at $10K each vs. $50K each, you can spend a year prototyping and testing and still save a fortune. If you only need one, it may or may not be worthwhile, depending on your hourly rate.

Gordan
> Hello Errol,
>
> Thank you so much for your response. Did you experience any problems
> with ESOS and your FC SAN in terms of stability? We already have our
> Myrinet FC cards and switches, and I agree, it was dirt cheap.

ESOS by all means is not perfect. I'm running an older release because it's impossible to upgrade a production system without downtime using ESOS (currently), but I was impressed with it nonetheless and I can see where it's going.

I think what has worked better for me is using SCST on Ubuntu. As long as your hardware is stable, you should have no issues. At another site, I have two boxes in production (running iSCSI at that site), and I've had zero non-hardware-related issues; they have been in prod for 1 - 2 years.
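For anyone wondering what an SCST configuration actually looks like, a minimal export of one block device over a QLogic FC target port is roughly the following. This is a from-memory sketch of the scstadmin config format, not Errol's configuration; the device path and the target WWN are placeholders:

    # /etc/scst.conf -- one block device exported as LUN 0 over FC
    HANDLER vdisk_blockio {
        DEVICE disk01 {
            filename /dev/vg0/lun0
        }
    }

    TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:12:34:56:78 {
            enabled 1
            LUN 0 disk01
        }
    }

The iSCSI case follows the same structure, with TARGET_DRIVER iscsi and an IQN as the target name.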
> ESOS by all means is not perfect. I'm running an older release because
> it's impossible to upgrade a production system without downtime using
> ESOS (currently), but I was impressed with it nonetheless and I can see
> where it's going.

Thanks again, Errol. Just out of curiosity, was any of this replicated?

N.
On Tue, 06/11/2013 01:13 PM, Gordan Bobic <gordan@bobich.net> wrote:
> Heavens forbid we should do some research, prototyping and testing
> before building the whole solution...
>
> It ultimately comes down to what your time is worth and how much you
> are saving. If you are looking to deploy 10 storage boxes at $10K each
> vs. $50K each, you can spend a year prototyping and testing and still
> save a fortune. If you only need one, it may or may not be worthwhile,
> depending on your hourly rate.

This is a really key point. I don't like to toot my own horn, but I've done EXTENSIVE and EXHAUSTIVE research into this. I built my first Open-E iSCSI box in like 2006. The right combination of hard disk, HDD firmware, RAID controller, controller firmware, motherboard, memory, CPU, NICs, HBAs... everything is critical, and by the time you narrow all of this down, test sufficiently and are ready to go into production, you've spent a significant amount of time and money.

Now that said, if you are able to piggyback off the knowledge of others, you get a nice shortcut, and to be fair, the open-source software has advanced and matured so much that it's really production-ready for certain workloads and environments.
On 6/11/13, Gordan Bobic <gordan@bobich.net> wrote:
> Heavens forbid we should do some research, prototyping and testing
> before building the whole solution...
>
> It ultimately comes down to what your time is worth and how much you
> are saving. If you are looking to deploy 10 storage boxes at $10K each
> vs. $50K each, you can spend a year prototyping and testing and still
> save a fortune. If you only need one, it may or may not be worthwhile,
> depending on your hourly rate.
>
> Gordan

And hence the purpose of this thread :). Gordon, you mentioned that you did use DRBD for separate instances outside of the NAS. I am curious to know of your experience with NAS-level replication, and what you feel would be a more stable and scalable fit.

N.
Gordan, sorry for the typo! N.
On Tue, 06/11/2013 01:23 PM, Nick Khamis <symack@gmail.com> wrote:
> > ESOS by all means is not perfect. I'm running an older release because
> > it's impossible to upgrade a production system without downtime using
> > ESOS (currently), but I was impressed with it nonetheless and I can
> > see where it's going.
>
> Thanks again, Errol. Just out of curiosity, was any of this replicated?

That is my next step. I had been planning on using InfiniBand, SDP and DRBD, but there are some funky issues there. I just never got around to it. I think what's needed, more than replication, is a dual-head configuration. A combination of RAID1, CLVM, Pacemaker, SCST and shared storage between two nodes should suffice.
> Now that said, if you are able to piggyback off the knowledge of
> others, you get a nice shortcut, and to be fair, the open-source
> software has advanced and matured so much that it's really
> production-ready for certain workloads and environments.

We run our BGP links on Quagga Linux boxes on IBM machines, transmitting an average of 700Mbps with packet sizes upwards of 900-1000 bytes. I don't lose sleep over them...

N.
On 06/11/2013 06:27 PM, Nick Khamis wrote:
> On 6/11/13, Gordan Bobic <gordan@bobich.net> wrote:
>> It ultimately comes down to what your time is worth and how much you
>> are saving. If you are looking to deploy 10 storage boxes at $10K each
>> vs. $50K each, you can spend a year prototyping and testing and still
>> save a fortune. If you only need one, it may or may not be worthwhile,
>> depending on your hourly rate.
>
> And hence the purpose of this thread :). Gordon, you mentioned that you
> did use DRBD for separate instances outside of the NAS. I am curious to
> know of your experience with NAS-level replication, and what you feel
> would be a more stable and scalable fit.

It largely depends on what exactly you want to do with it. For a NAS, I use ZFS + lsyncd for near-synchronous replication (rsync-on-write).

For a SAN I tend to use ZFS with zvols exported over iSCSI, with a periodic ZFS send to the backup NAS. If you need real-time replication for fail-over purposes, I would probably run DRBD on top of ZFS zvols.

Gordan
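The periodic-send part of that amounts to a couple of commands run from cron; a sketch, with the pool, zvol, snapshot names and backup host all being placeholders:

    # Incremental replication of a zvol to the backup box
    zfs snapshot tank/vm01@2013-06-12
    zfs send -i tank/vm01@2013-06-11 tank/vm01@2013-06-12 | \
        ssh backup-san zfs receive -F backup/vm01

Because the send is incremental against the previous snapshot, only the changed blocks cross the wire, which is what makes this workable over a WAN link.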
On 06/11/2013 06:28 PM, Errol Neal wrote:
> That is my next step. I had been planning on using InfiniBand, SDP and
> DRBD, but there are some funky issues there. I just never got around
> to it.

The first thing that jumps out at me here is InfiniBand. Do you have the infrastructure and cabling in place to actually do that? This can be very relevant depending on your environment. If you are planning to get some cheap kit on eBay to do this, that's all well and good, but will you be able to get a replacement if something breaks in a year or three? One nice thing about Ethernet is that it will always be around, it will always be cheap, and it will always be compatible.

For most uses, multiple gigabit links bonded together are ample. Remember that you will get, on a good day, about 120 IOPS per disk. Assuming a typical 4K operation size, that's 480KB/s/disk. At 16KB/op that is still 1920KB/s/disk. At that rate you'd need around 50 disks to saturate a single gigabit channel. And you can bond a bunch of them together for next to nothing in switch/NIC costs.

> I think what's needed, more than replication, is a dual-head
> configuration.

Elaborate?

> A combination of RAID1, CLVM, Pacemaker, SCST and shared storage
> between two nodes should suffice.

In what configuration?

Gordan
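Spelling out that arithmetic (assuming, as a round number, about 100MB/s of usable payload per gigabit link):

    # disks needed to saturate one GigE link with random I/O
    echo $(( (100 * 1024) / (120 * 16) ))   # 16KiB ops: ~53 disks
    echo $(( (100 * 1024) / (120 * 4) ))    # 4KiB ops:  ~213 disks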
Hello everyone,

At my current workplace, we've been evaluating solutions from DDN vs. NetApp vs. in-house. The requirement was a low entry price for at least 1/3 PB of storage as a starting point, high IO/bandwidth, low latency and Hadoop compatibility, with a target capacity of 1PB and room for further growth.

The DDN and NetApp solutions were all $300k+, with limited flexibility, overpriced replacement drives and limited expandability options. After evaluating our own solution on old hardware we had lying around, we decided to give it a shot. There were obviously some risks: convincing management to sign the PO for $25k and explaining the risks and benefits, with the worst-case scenario being to reuse the hardware as more traditional storage nodes.

We purchased 4 x 3U SuperMicro chassis with 36 x 3.5" HDD bays and additional internal slots for OS drives, along with a few used $150 InfiniBand 40Gbit cards and an IB switch (the most expensive single piece of equipment here, ~$5-7k).

The result is a 4-node GlusterFS cluster running over the RDMA transport, with ZFS bricks (10 HDDs in raidz + 1 SSD cache + 1 spare), 200-nanosecond fabric latency, highly configurable replication (we use 3x) and flexible expandability. In our tests so far with this system, we've seen 18GB/sec fabric bandwidth, reading from all 3 replicas (which is what Gluster does when you replicate - it spreads the IO) at 6GB/sec per replica. 6GB per second is pretty much the most you can squeeze out of 40Gbit InfiniBand (aka QDR), but that was a sequential read test. However, by increasing the number of Gluster nodes and bricks, you can achieve greater aggregate throughput.

I suppose you could do DRBD over RDMA (SDP or SuperSockets, as per the DRBD docs: http://www.drbd.org/users-guide/s-replication-transports.html) if your environment requires it, instead of Gluster. InfiniBand is now part of the Linux kernel, compared to a few years ago, and used hardware is not that expensive - not much different from Fibre Channel. 56Gbit (aka FDR) is also available, albeit more expensive. IMHO, InfiniBand is going to become more relevant and universal in the upcoming years.

Cheers,

Anastas S
sysadmin++
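For the archives, the Gluster/ZFS side of a build like that boils down to a handful of commands per node. This is only an illustrative sketch: the pool/volume names, disk names and the three-node brick list are placeholders (with replica 3, the brick count must be a multiple of 3), not the actual layout described above:

    # One ZFS brick per node: 10-disk raidz + SSD read cache + hot spare
    zpool create brick1 raidz sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
        cache sdl spare sdm
    zfs create brick1/gv0

    # A 3-way replicated Gluster volume over RDMA, one brick per node
    gluster volume create gv0 replica 3 transport rdma \
        node1:/brick1/gv0 node2:/brick1/gv0 node3:/brick1/gv0
    gluster volume start gv0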
>> Infiniband is now part of the linux kernel, compare to few years ago..
>> and used hardware is not that expensive.. not much different from Fiber Channel.
>> 56Gig (aka FDR) is also available, albeit more expensive..
>> Imho, Infiniband is going to become more relevant and universal in the
>> upcoming years..
>> Cheers,
>> Anastas S
>> sysadmin++

Hello Anastas,

Thank you so much for your response; it was very informative. One can wager their bets on fiber as the transport layer of the future, while others see Ethernet going to 100GbE really soon. It depends who you talk to, I guess.

N.
Some people have been asking about the ZFS configuration used for our Gluster bricks.. Well, it's quite simple.

On a 36-drive machine, we chose to configure it with 3 bricks @ 12 drives per brick:
- every brick consists of 12 drives
- 10 drives are used for RAIDZ1 (2TB 3.5" WD Enterprise Black HDD)
- 1 drive is used as cache (64GB 2.5" SSD with an AdaptaDrive bracket for a great fit)
- 1 drive is used as a spare (2TB 3.5" WD Enterprise Black HDD)

Here is the zpool status output of one of the bricks:

asemenov@lakshmi:~$ sudo zpool status brick0
  pool: brick0
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        brick0      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            slot0   ONLINE       0     0     0
            slot1   ONLINE       0     0     0
            slot2   ONLINE       0     0     0
            slot3   ONLINE       0     0     0
            slot4   ONLINE       0     0     0
            slot5   ONLINE       0     0     0
            slot6   ONLINE       0     0     0
            slot7   ONLINE       0     0     0
            slot8   ONLINE       0     0     0
            slot9   ONLINE       0     0     0
        cache
          slot10    ONLINE       0     0     0
        spares
          slot11    AVAIL

errors: No known data errors

To lower the entry $$, we have only 2 bricks per Gluster node populated ([1] and [2]), but in the end the 4 Gluster nodes will look like this:

{[1][3][4]} {[1][2][4]} {[1][2][3]} {[2][3][4]}

legend:
{ } - gluster node
[ ] - gluster brick
1-4 - replica/mirror id

Expanding this setup is possible with SAS-attached expanders (adding disks in multiples of 12), or better yet, by adding 4 identical nodes, for better Gluster performance, higher throughput and better IB fabric utilization. (A command-line sketch of this layout follows below, after the quoted digest.)

Hope this inspires you and helps in your projects.

Cheers,
Anastas S
sysadmin++

>> Hello everyone,
>>
>> At my current workplace, we've been evaluating solutions from DDN vs.
>> NetApp vs. in-house.
>> The requirement was a low entry price for at least 1/3 PB of storage
>> as a starting point, high IO/bandwidth, low latency and
>> Hadoop compatibility, with a target capacity of 1PB and further expansion.
>> The DDN and NetApp solutions were all $300k+ with limited flexibility,
>> overpriced replacement drives and limited expandability options.
>> After evaluating our own solution on old hardware we had lying around,
>> we decided to give it a shot.
>> There were obviously some risks: convincing management to sign the PO
>> for $25k and explaining the risks and benefits, with a worst-case
>> scenario of using it as more traditional storage nodes.
>>
>> We purchased 4 x 3U SuperMicro chassis with 36 x 3.5" HDDs and
>> additional internal slots for OS drives, along with a few used $150
>> Infiniband 40Gig cards and an IB switch (the most expensive single piece of
>> equipment here, ~$5-7k).
>>
>> The result is a 4-node GlusterFS cluster running over RDMA transport, ZFS
>> bricks (10 HDD in raidz + 1 SSD cache + 1 spare), with 200 nanosecond
>> fabric latency, highly configurable replication (we use 3x) and
>> flexible expandability.
>> In our tests so far with this system, we've seen 18GB/sec fabric
>> bandwidth, reading from all 3 replicas (which is what Gluster does
>> when you replicate - it spreads IO) at 6GB/sec per replica.
>> 6GB per second is pretty much the most you can squeeze out of 40Gb
>> Infiniband (aka QDR), but that was a sequential read test. However, by
>> increasing the number of Gluster nodes and bricks, you can achieve greater
>> aggregate throughput.
>> I suppose you could do DRBD over RDMA (SDP or SuperSockets as per the
>> DRBD docs: http://www.drbd.org/users-guide/s-replication-transports.html)
>> if your environment requires it, over Gluster..
>>
>> Infiniband is now part of the linux kernel, compare to few years ago..
>> and used hardware is not that expensive..
not much different from Fiber Channel. >> 56Gig (aka FDR) is also available, albeit more expensive.. >> Imho, Infiniband is going to become more relevant and universal in the >> upcoming years.. >> >> Cheers, >> Anastas S >> sysadmin++ >> >> >> On Wed, Jun 12, 2013 at 12:00 PM, <xen-users-request@lists.xen.org> wrote: >>> Send Xen-users mailing list submissions to >>> xen-users@lists.xen.org >>> >>> To subscribe or unsubscribe via the World Wide Web, visit >>> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-users >>> or, via email, send a message with subject or body ''help'' to >>> xen-users-request@lists.xen.org >>> >>> You can reach the person managing the list at >>> xen-users-owner@lists.xen.org >>> >>> When replying, please edit your Subject line so it is more specific >>> than "Re: Contents of Xen-users digest..." >>> >>> >>> Today''s Topics: >>> >>> 1. Re: Linux Fiber or iSCSI SAN (Gordan Bobic) >>> 2. Re: Linux Fiber or iSCSI SAN (Nick Khamis) >>> 3. Re: Linux Fiber or iSCSI SAN (Gordan Bobic) >>> 4. Re: Linux Fiber or iSCSI SAN (Errol Neal) >>> 5. Re: Linux Fiber or iSCSI SAN (Nick Khamis) >>> 6. Re: Linux Fiber or iSCSI SAN (Nick Khamis) >>> 7. Re: Linux Fiber or iSCSI SAN (Errol Neal) >>> 8. Re: Linux Fiber or iSCSI SAN (Nick Khamis) >>> 9. Re: Linux Fiber or iSCSI SAN (Errol Neal) >>> 10. Re: Linux Fiber or iSCSI SAN (Nick Khamis) >>> 11. Re: Linux Fiber or iSCSI SAN (Gordan Bobic) >>> 12. Re: Linux Fiber or iSCSI SAN (Gordan Bobic) >>> 13. Xen 4.1 compile from source and install on Fedora 17 >>> (ranjith krishnan) >>> 14. Re: Xen 4.1 compile from source and install on Fedora 17 (Wei Liu) >>> 15. pv assign pci device (jacek burghardt) >>> 16. Re: pv assign pci device (Gordan Bobic) >>> 17. Re: Blog: Installing the Xen hypervisor on Fedora 19 >>> (Dario Faggioli) >>> 18. Xen Test Day is today! (Dario Faggioli) >>> 19. Re: [Xen-devel] Xen Test Day is today! (Fabio Fantoni) >>> >>> >>> ---------------------------------------------------------------------- >>> >>> Message: 1 >>> Date: Tue, 11 Jun 2013 17:52:02 +0100 >>> From: Gordan Bobic <gordan@bobich.net> >>> To: Nick Khamis <symack@gmail.com> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <1bdcfbd8f2994ee32483e1646fcbe5ec@mail.shatteredsilicon.net> >>> Content-Type: text/plain; charset=UTF-8; format=flowed >>> >>> On Tue, 11 Jun 2013 12:29:22 -0400, Nick Khamis <symack@gmail.com> >>> wrote: >>>> On Tue, Jun 11, 2013 at 11:30 AM, Nick Khamis wrote: >>>> >>>> Hello Everyone, >>>> >>>> I am speaking for everyone when saying that we are really interested >>>> in knowing what people are >>>> using in deployment. This would be active/active replicated, block >>>> level storage solutions at the: >>>> >>>> NAS Level: FreeNAS, OpenFiler (I know it''s not linux), IET >>>> FS Level: ZFS, OCFS/2, GFS/2, GlusterFS >>>> Replication Level: DRBD vs GlusterFS >>>> Cluster Level: OpenAIS with Pacemaker etc... >>>> >>>> Our hope is for an educated breakdown (i.e., comparisons, benefits, >>>> limitation) of different setups, as opposed to >>>> a war of words on which NAS solution is better than the other. >>>> Comparing black boxes would also be interesting >>>> at a performance level. Talk about pricing, not so much since we >>>> already know that they cost and arm and a leg. >>>> >>>> Kind Regards, >>>> >>>> Nick. 
>>>> >>>> There was actually one more level I left out >>>> >>>> Hardware Level: PCIe bus (8x 16x V2 etc..), Interface cards (FC and >>>> RJ), SAS (Seagate vs WD) >>>> >>>> I hope this thread takes off, and individuals interested in the same >>>> topic can get some really valuable info. >>>> >>>> On a side note, and interesting comment I received was on the risks >>>> that are associated with such a custom build, as >>>> well as the lack of flexibility in some sense. >>> >>> The risk issue I might entertain to some extent (although >>> personally I think the risk is LOWER if you built the system >>> yourself and you have it adequately mirrored and backed up - if >>> something goes wrong you actually understand how it all hangs >>> together and can fix it yourself quickly, as opposed to hours >>> of downtime while an engineer on the other end of the phone >>> tries to guess what is actually wrong). >>> >>> But the flexibility argument is completely bogus. If you are >>> building the solution yourself you have the flexibility to do >>> whatever you want. When you buy and off the shelf >>> all-in-one-black-box appliance you are straitjacketed by >>> whatever somebody else decided might be useful without any >>> specific insight into your particular use case. >>> >>> Gordan >>> >>> >>> >>> ------------------------------ >>> >>> Message: 2 >>> Date: Tue, 11 Jun 2013 13:03:12 -0400 >>> From: Nick Khamis <symack@gmail.com> >>> To: Gordan Bobic <gordan@bobich.net> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <CAGWRaZZga+SuBc4iV0FO=D=HLthY=DNNJ-fuDEa1re8DQygZZA@mail.gmail.com> >>> Content-Type: text/plain; charset="iso-8859-1" >>> >>> On Tue, Jun 11, 2013 at 12:52 PM, Gordan Bobic <gordan@bobich.net> wrote: >>> >>>> >>>> The risk issue I might entertain to some extent (although >>>> personally I think the risk is LOWER if you built the system >>>> yourself and you have it adequately mirrored and backed up - if >>>> something goes wrong you actually understand how it all hangs >>>> together and can fix it yourself quickly, as opposed to hours >>>> of downtime while an engineer on the other end of the phone >>>> tries to guess what is actually wrong). >>> >>> Very True!! >>> >>> But apples vs apples. It comes down to the warranty on your >>> iscsi raid controller, cpu etc.. vs. whatever guts are in the >>> powervault. And I agree with both trains of thoughts... >>> Warranty through adaptec or Dell, in either case there >>> will be downtime. >>> >>> >>>> >>>> But the flexibility argument is completely bogus. If you are >>>> building the solution yourself you have the flexibility to do >>>> whatever you want. When you buy and off the shelf >>>> all-in-one-black-box appliance you are straitjacketed by >>>> whatever somebody else decided might be useful without any >>>> specific insight into your particular use case. >>>> >>>> Gordan >>> >>> For sure... The inflexibility I was referring to are instance where >>> one starts out an endeavour to build a replicated NAS, and finds >>> out the hard way regarding size limitations of DRBD, lack of >>> clustering capabilities of FreeNAS, or instability issues of OpenFiler >>> with large instances. >>> >>> There is also SCSI-3 persistent reservations issues which is needed >>> by some of the virtualization systems that may of may not be supported >>> by FreeNAS (last I checked)... >>> >>> N. >>> >>> N. >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... 
>>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130611/3e06eae9/attachment.html> >>> >>> ------------------------------ >>> >>> Message: 3 >>> Date: Tue, 11 Jun 2013 18:13:58 +0100 >>> From: Gordan Bobic <gordan@bobich.net> >>> To: Nick Khamis <symack@gmail.com> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <7d0db81b985d5c4e76781d94626b0cd9@mail.shatteredsilicon.net> >>> Content-Type: text/plain; charset=UTF-8; format=flowed >>> >>> On Tue, 11 Jun 2013 13:03:12 -0400, Nick Khamis <symack@gmail.com> >>> wrote: >>>> On Tue, Jun 11, 2013 at 12:52 PM, Gordan Bobic wrote: >>>> >>>> The risk issue I might entertain to some extent (although >>>> personally I think the risk is LOWER if you built the system >>>> yourself and you have it adequately mirrored and backed up - if >>>> something goes wrong you actually understand how it all hangs >>>> together and can fix it yourself quickly, as opposed to hours >>>> of downtime while an engineer on the other end of the phone >>>> tries to guess what is actually wrong). >>>> >>>> Very True!! >>>> >>>> But apples vs apples. It comes down to the warranty on your >>>> iscsi raid controller, cpu etc.. vs. whatever guts are in the >>>> powervault. And I agree with both trains of thoughts... >>>> Warranty through adaptec or Dell, in either case there >>>> will be downtime. >>> >>> If you build it yourself you will save enough money that you can >>> have 5 of everything sitting on the shelf for spares. And it''ll >>> all still be covered by a warranty. >>> >>>> But the flexibility argument is completely bogus. If you are >>>> building the solution yourself you have the flexibility to do >>>> whatever you want. When you buy and off the shelf >>>> all-in-one-black-box ?appliance you are straitjacketed by >>>> whatever somebody else decided might be useful without any >>>> specific insight into your particular use case. >>>> >>>> For sure... The inflexibility I was referring to are instance where >>>> one starts out an endeavour to build a replicated NAS, and finds >>>> out the hard way regarding size limitations of DRBD, lack of >>>> clustering capabilities of FreeNAS, or instability issues of >>>> OpenFiler with large instances. >>> >>> Heavens forbid we should do some research, prototyping and >>> testing before building the whole solution... >>> >>> It ultimately comes down to what your time is worth and >>> how much you are saving. If you are looking to deploy 10 >>> storage boxes at $10K each vs. $50K each, you can spend >>> a year prototyping and testing and still save a fortune. >>> If you only need one, it may or may not be worthwhile >>> depending on your hourly rate. >>> >>> Gordan >>> >>> >>> >>> ------------------------------ >>> >>> Message: 4 >>> Date: Tue, 11 Jun 2013 13:17:10 -0400 >>> From: Errol Neal <eneal@businessgrade.com> >>> To: Nick Khamis <symack@gmail.com> >>> Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: <1370971030353245500@businessgrade.com> >>> Content-Type: text/plain >>> >>> >>>>> I''ve built a number of white box SANs using everything from OpenSolaris >>>>> and COMSTAR, Open-E, OpenFiler, SCST, IET... etc.using iSCSI and FC. >>>>> I''ve settled Ubuntu boxes booted via DRBD running SCST OR ESOS. >>>>> From a performance perspective, I have pretty large customer that two XCP >>>>> pools running off a Dell MD3200F using 4GB FC. 
To compare, I took a Dell >>>>> 2970 or something like that, stuck 8 Seatgate 2.5" Constellation Drives in >>>>> it, a 4GB HBA and installed ESOS on it. >>>>> I never got around to finishing my testing, but the ESOS box can >>>>> definitely keep up and things like LSI cachecade would really help to bring >>>>> it to a more enterprise-level performance with respect to random reads and >>>>> writes. >>>>> Lastly, there is such an abundance of DIRT CHEAP, lightly used 4GB FC >>>>> equipment on the market today that I find it interesting that people still >>>>> prefer iSCSI. iSCSI is good if you have 10GBE which is still far to >>>>> expensive per port IMO. However, you can get 2 - 4 port, 4GB FC Hbas on >>>>> ebay for under 100 bucks and I generally am able to purchase fully loaded >>>>> switches (brocade 200e) for somewhere in the neighborhood of 300 bucks each! >>>>> MPIO with 2 FC ports from an initiator to a decent target can easily >>>>> saturate the link on basic sequential r/w write tests. Not to mention, >>>>> improved latency, access times, etc for random i/o. >>>> >>>> Hello Eneal, >>>> >>>> Thank you so much for your response. Did you experience any problems with >>>> ESOS and your FS SAN in terms of stability. >>>> We already have our myrinet FC cards and switches, and I agree, it was dirt >>>> cheap. >>> >>> ESOS by all means is not perfect. I''m running an older release because it''s impossible to upgrade a production system without downtime using ESOS (currently) but I was impressed with it non the less and i can see where it''s going. >>> I think what has worked better for me is using SCST on Ubuntu. As long as your hardware is stable, you should have no issues. >>> At another site, I have two boxes in production (running iSCSI at this site) and I''ve had zero non-hardware-related issues and I''ve been running them in prod for 1 - 2 years. >>> >>> >>> >>> >>> ------------------------------ >>> >>> Message: 5 >>> Date: Tue, 11 Jun 2013 13:23:05 -0400 >>> From: Nick Khamis <symack@gmail.com> >>> To: eneal@businessgrade.com >>> Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <CAGWRaZbBjH_bMS-Zgd-qN8f5b8zey2ng-ZZaGZ8QUkoaiKZ+XQ@mail.gmail.com> >>> Content-Type: text/plain; charset=ISO-8859-1 >>> >>>>> ESOS by all means is not perfect. I''m running an older release because it''s impossible to >>>>> upgrade a production system without downtime using ESOS (currently) but I was >>>>> impressed with it non the less and i can see where it''s going. >>> >>> Thanks again Errol. Just our of curiosity was any of this replicated? >>> >>> N. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 6 >>> Date: Tue, 11 Jun 2013 13:27:24 -0400 >>> From: Nick Khamis <symack@gmail.com> >>> To: Gordan Bobic <gordan@bobich.net> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <CAGWRaZbY4uqZaq5b-CWam27vG_3K=qQnZBOcM5F_7UV3jya_qw@mail.gmail.com> >>> Content-Type: text/plain; charset=ISO-8859-1 >>> >>> On 6/11/13, Gordan Bobic <gordan@bobich.net> wrote: >>>> Heavens forbid we should do some research, prototyping and >>>> testing before building the whole solution... >>>> >>>> It ultimately comes down to what your time is worth and >>>> how much you are saving. If you are looking to deploy 10 >>>> storage boxes at $10K each vs. $50K each, you can spend >>>> a year prototyping and testing and still save a fortune. 
>>>> If you only need one, it may or may not be worthwhile >>>> depending on your hourly rate. >>>> >>>> Gordan >>> >>> And hence the purpose of this thread :). Gordon, you mentioned that >>> you did use DRBD >>> for separate instances outside of the NAS. I am curious to know of >>> your experience with NAS level replication. What you feel would be a >>> more stable and scalable fit. >>> >>> N. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 7 >>> Date: Tue, 11 Jun 2013 13:26:00 -0400 >>> From: Errol Neal <eneal@businessgrade.com> >>> To: Gordan Bobic <gordan@bobich.net> >>> Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>, >>> Nick Khamis <symack@gmail.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: <1370971560963286500@businessgrade.com> >>> Content-Type: text/plain >>> >>> On Tue, 06/11/2013 01:13 PM, Gordan Bobic <gordan@bobich.net> wrote: >>>> >>>> Heavens forbid we should do some research, prototyping and >>>> testing before building the whole solution... >>>> >>>> It ultimately comes down to what your time is worth and >>>> how much you are saving. If you are looking to deploy 10 >>>> storage boxes at $10K each vs. $50K each, you can spend >>>> a year prototyping and testing and still save a fortune. >>>> If you only need one, it may or may not be worthwhile >>>> depending on your hourly rate. >>> >>> This is a really key point. I don''t like to toot my own horn, but I''ve done EXTENSIVE and EXHAUSTIVE research into this. I built my first Open-E iSCSI box in like 2006. The right combination of hard disk, hdd firmware, raid controller, controller firmware, motherboard, memory, cpu, nics, hbas.. everything is critical and by the time you narrow all of this down and test sufficiently and are ready to go into production, you''ve spent a significant amount of time and money. >>> Now that said, if you able to piggy back off the knowledge of others, then you get a nice short cut and to be fair, the open source software has advanced and matured so much that it''s really production ready for certain workloads and environments. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 8 >>> Date: Tue, 11 Jun 2013 13:27:55 -0400 >>> From: Nick Khamis <symack@gmail.com> >>> To: Gordan Bobic <gordan@bobich.net> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <CAGWRaZYxn6y5D-q3HnTo-H92NyDaORWh7fSKR7Q6HWnF48xsqw@mail.gmail.com> >>> Content-Type: text/plain; charset=ISO-8859-1 >>> >>> Gordan, sorry for the typo! >>> >>> N. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 9 >>> Date: Tue, 11 Jun 2013 13:28:50 -0400 >>> From: Errol Neal <eneal@businessgrade.com> >>> To: Nick Khamis <symack@gmail.com> >>> Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: <1370971730454167500@businessgrade.com> >>> Content-Type: text/plain >>> >>> On Tue, 06/11/2013 01:23 PM, Nick Khamis <symack@gmail.com> wrote: >>>>>> ESOS by all means is not perfect. I''m running an older release because it''s impossible to >>>>>> upgrade a production system without downtime using ESOS (currently) but I was >>>>>> impressed with it non the less and i can see where it''s going. >>>> >>>> Thanks again Errol. Just our of curiosity was any of this replicated? >>> >>> That is my next step. I had been planning of using Ininiband, SDP and DRBD, but there are some funky issues there. 
I just never got around to it. >>> I think what''s necessary over replication is a dual head configuration. >>> A combination of RAID1, CLVM, Pacemaker, SCST and shared storage between two nodes should suffice. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 10 >>> Date: Tue, 11 Jun 2013 13:32:16 -0400 >>> From: Nick Khamis <symack@gmail.com> >>> To: eneal@businessgrade.com >>> Cc: Gordan Bobic <gordan@bobich.net>, "xen-users@lists.xensource.com" >>> <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: >>> <CAGWRaZZzsxXSRuH+XgfULrVcX7AGiSueA9f9WLzarMgseByNpA@mail.gmail.com> >>> Content-Type: text/plain; charset=ISO-8859-1 >>> >>>> Now that said, if you able to piggy back off the knowledge of others, then >>>> you get a nice short cut and to be fair, the open source software has >>>> advanced and matured so much that it''s really production ready for certain >>>> workloads and environments. >>> >>> We run our BGP links on Quagga linux boxes on IBM machines and >>> transmitting an average of 700Mbps with packet sizes upwards of >>> 900-1000 bytes. I don''t loose sleep over them.... >>> >>> N. >>> >>> >>> >>> ------------------------------ >>> >>> Message: 11 >>> Date: Tue, 11 Jun 2013 19:17:04 +0100 >>> From: Gordan Bobic <gordan@bobich.net> >>> To: Nick Khamis <symack@gmail.com> >>> Cc: xen-users <xen-users@lists.xensource.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: <51B769A0.8040301@bobich.net> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>> >>> On 06/11/2013 06:27 PM, Nick Khamis wrote: >>>> On 6/11/13, Gordan Bobic <gordan@bobich.net> wrote: >>>>> Heavens forbid we should do some research, prototyping and >>>>> testing before building the whole solution... >>>>> >>>>> It ultimately comes down to what your time is worth and >>>>> how much you are saving. If you are looking to deploy 10 >>>>> storage boxes at $10K each vs. $50K each, you can spend >>>>> a year prototyping and testing and still save a fortune. >>>>> If you only need one, it may or may not be worthwhile >>>>> depending on your hourly rate. >>>>> >>>>> Gordan >>>> >>>> And hence the purpose of this thread :). Gordon, you mentioned that >>>> you did use DRBD >>>> for separate instances outside of the NAS. I am curious to know of >>>> your experience with NAS level replication. What you feel would be a >>>> more stable and scalable fit. >>> >>> It largely depends on what exactly do you want to do with it. For a NAS, >>> I use ZFS + lsyncd for near-synchronous replication (rsync-on-write). >>> >>> For a SAN I tend to use ZFS with zvols exported over iSCSI, with period >>> ZFS send to the backup NAS. If you need real-time replication for >>> fail-over purposes, I would probably run DRBD on top of ZFS zvols. >>> >>> Gordan >>> >>> >>> >>> ------------------------------ >>> >>> Message: 12 >>> Date: Tue, 11 Jun 2013 19:28:40 +0100 >>> From: Gordan Bobic <gordan@bobich.net> >>> To: eneal@businessgrade.com >>> Cc: "xen-users@lists.xensource.com" <xen-users@lists.xensource.com>, >>> Nick Khamis <symack@gmail.com> >>> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN >>> Message-ID: <51B76C58.6090303@bobich.net> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>> >>> On 06/11/2013 06:28 PM, Errol Neal wrote: >>>> On Tue, 06/11/2013 01:23 PM, Nick Khamis <symack@gmail.com> wrote: >>>>>>> ESOS by all means is not perfect. 
I''m running an older release because it''s impossible to >>>>>>> upgrade a production system without downtime using ESOS (currently) but I was >>>>>>> impressed with it non the less and i can see where it''s going. >>>>> >>>>> Thanks again Errol. Just our of curiosity was any of this replicated? >>>> >>>> That is my next step. I had been planning of using Ininiband, >>>> SDP and DRBD, but there are some funky issues there. I just >>>> never got around to it. >>> >>> The first thing that jumps out at me here is infiniband. Do you have the >>> infrastructure and cabling in place to actually do that? This can be >>> very relevant depending on your environment. If you are planning to get >>> some cheap kit on eBay to do this, that''s all well and good, but will >>> you be able to get a replacement if something breaks in a year or three? >>> One nice thing about ethernet is that it will always be around, it will >>> always be cheap, and it will always be compatible. >>> >>> For most uses multiple gigabit links bonded together are ample. Remember >>> that you will get, on a good day, about 120 IOPS per disk. Assuming a >>> typical 4K operation size that''s 480KB/s/disk. At 16KB/op that is still >>> 1920KB/s/disk. At that rate you''d need 50 disks to saturate a single >>> gigabit channel. And you can bond a bunch of them together for next to >>> nothing in switch/NIC costs. >>> >>>> I think what''s necessary over replication is a dual head >>>> configuration. >>> >>> Elaborate? >>> >>>> A combination of RAID1, CLVM, Pacemaker, SCST and shared storage >>>> between two nodes should suffice. >>> >>> In what configuration? >>> >>> Gordan >>> >>> >>> >>> ------------------------------ >>> >>> Message: 13 >>> Date: Tue, 11 Jun 2013 16:39:39 -0500 >>> From: ranjith krishnan <ranjithkrishnan1@gmail.com> >>> To: xen-users@lists.xen.org >>> Subject: [Xen-users] Xen 4.1 compile from source and install on Fedora >>> 17 >>> Message-ID: >>> <CAEybL6wFUpGJJa_BHumwR_TgVnN63qJ4ZHGF+EmdPF9mcaD7mQ@mail.gmail.com> >>> Content-Type: text/plain; charset="iso-8859-1" >>> >>> Hello, >>> >>> I am relatively new to Xen and need help compiling and installing Xen from >>> source. >>> >>> Using some tutorials online, I have got Xen working with the ''yum install >>> xen'' method. >>> I used virt-manager and was able to get 2 domUs working ( CentOS 5, and >>> Fedora 16). >>> My domUs reside on Logical Volumes in an LVM, on a second hard disk sda2, >>> while my dom0 is installed on sda1. Everything is working fine in this >>> configuration. >>> I want to use Xen 4.1 since I want to continue using >>> virt-install/virt-manager for domU provisioning. >>> >>> For my work now, I want to install Xen from source and try to modify some >>> source code files and test things out. >>> I have seen some tutorials online, and I am not sure they give the complete >>> picture. >>> For ex, >>> http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora >>> Fedora 17 uses grub 2. When we do a yum install, the grub entries are taken >>> care of and things just work. >>> When I install from source, this is not the case. Are there any tutorials >>> which give a complete picture? >>> Or if someone has got Xen working from source on Fedora 16, 17 or 18, can >>> you give me tips on how to edit grub configuration so that xen boots ok. >>> I have tried and failed once compiling and installing Xen on Fedora 16, >>> which is when I used yum. 
>>> >>> >>> -- >>> Ranjith krishnan >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... >>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130611/34655873/attachment.html> >>> >>> ------------------------------ >>> >>> Message: 14 >>> Date: Tue, 11 Jun 2013 23:40:04 +0100 >>> From: Wei Liu <wei.liu2@citrix.com> >>> To: ranjith krishnan <ranjithkrishnan1@gmail.com> >>> Cc: xen-users@lists.xen.org, wei.liu2@citrix.com >>> Subject: Re: [Xen-users] Xen 4.1 compile from source and install on >>> Fedora 17 >>> Message-ID: <20130611224004.GA25483@zion.uk.xensource.com> >>> Content-Type: text/plain; charset="us-ascii" >>> >>> Hello, >>> >>> I''ve seen your mail to xen-devel as well. Given that you''re still in >>> configuration phase, my gut feeling is that this is the proper list to >>> post. When you have questions about Xen code / development workflow you >>> can ask them on xen-devel. >>> >>> On Tue, Jun 11, 2013 at 04:39:39PM -0500, ranjith krishnan wrote: >>>> Hello, >>>> >>>> I am relatively new to Xen and need help compiling and installing Xen from >>>> source. >>>> >>>> Using some tutorials online, I have got Xen working with the ''yum install >>>> xen'' method. >>>> I used virt-manager and was able to get 2 domUs working ( CentOS 5, and >>>> Fedora 16). >>>> My domUs reside on Logical Volumes in an LVM, on a second hard disk sda2, >>>> while my dom0 is installed on sda1. Everything is working fine in this >>>> configuration. >>>> I want to use Xen 4.1 since I want to continue using >>>> virt-install/virt-manager for domU provisioning. >>>> >>>> For my work now, I want to install Xen from source and try to modify some >>>> source code files and test things out. >>>> I have seen some tutorials online, and I am not sure they give the complete >>>> picture. >>>> For ex, >>>> http://wiki.xen.org/wiki/Xen_4.2_Build_From_Source_On_RHEL_CentOS_Fedora >>>> Fedora 17 uses grub 2. When we do a yum install, the grub entries are taken >>>> care of and things just work. >>>> When I install from source, this is not the case. Are there any tutorials >>>> which give a complete picture? >>>> Or if someone has got Xen working from source on Fedora 16, 17 or 18, can >>>> you give me tips on how to edit grub configuration so that xen boots ok. >>>> I have tried and failed once compiling and installing Xen on Fedora 16, >>>> which is when I used yum. >>> >>> For the grub entry, the simplest method is to place your binary under >>> /boot and invoke update-grub2 (which is also invoked when you do ''yum >>> install'' if I''m not mistaken). In theory it should do the right thing. >>> >>> Another method to solve your problem is to modify grub.conf yourself. >>> Just copy the entry that ''yum install'' adds in grub.conf, replace the >>> binary file name with the one you compile and you''re all set. >>> >>> You might also find this page useful if you''re to develop Xen. >>> http://wiki.xen.org/wiki/Xen_Serial_Console >>> (it also contains sample entries for legacy grub and grub2, nice ;-) ) >>> >>> >>> Wei. 
>>> >>>> >>>> >>>> -- >>>> Ranjith krishnan >>> >>>> _______________________________________________ >>>> Xen-users mailing list >>>> Xen-users@lists.xen.org >>>> http://lists.xen.org/xen-users >>> >>> >>> >>> >>> ------------------------------ >>> >>> Message: 15 >>> Date: Tue, 11 Jun 2013 19:01:33 -0600 >>> From: jacek burghardt <jaceksburghardt@gmail.com> >>> To: xen-users <xen-users@lists.xen.org> >>> Subject: [Xen-users] pv assign pci device >>> Message-ID: >>> <CAHyyzzQ53ZHYExKQ15TQSMdaXuN6t7_+wuJnFMFywvwJDYrBGA@mail.gmail.com> >>> Content-Type: text/plain; charset="iso-8859-1" >>> >>> I have xeon quad core server I wonder if is possible to assign pci usb >>> device to pv if the server does not suport iommu vd-t >>> I had blacklisted usb modules and hid devices and devices are listed as >>> assignable >>> but when I add them to pv I get this error libxl: error: libxl: error: >>> libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn''t support reset >>> from sysfs for PCI device 0000:00:1d.0 >>> libxl: error: libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn''t >>> support reset from sysfs for PCI device 0000:00:1d.1 >>> Daemon running with PID 897 >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... >>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130611/6e5ccfba/attachment.html> >>> >>> ------------------------------ >>> >>> Message: 16 >>> Date: Wed, 12 Jun 2013 07:13:15 +0100 >>> From: Gordan Bobic <gordan@bobich.net> >>> To: jacek burghardt <jaceksburghardt@gmail.com> >>> Cc: xen-users <xen-users@lists.xen.org> >>> Subject: Re: [Xen-users] pv assign pci device >>> Message-ID: <51B8117B.3020404@bobich.net> >>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed >>> >>> On 06/12/2013 02:01 AM, jacek burghardt wrote: >>>> I have xeon quad core server I wonder if is possible to assign pci usb >>>> device to pv if the server does not suport iommu vd-t >>>> I had blacklisted usb modules and hid devices and devices are listed as >>>> assignable >>>> but when I add them to pv I get this error libxl: error: libxl: error: >>>> libxl_pci.c:989:libxl__device_pci_reset: The kernel doesn''t support >>>> reset from sysfs for PCI device 0000:00:1d.0 >>>> libxl: error: libxl_pci.c:989:libxl__device_pci_reset: The kernel >>>> doesn''t support reset from sysfs for PCI device 0000:00:1d.1 >>>> Daemon running with PID 897 >>> >>> I don''t think that is a fatal error. I get that on, for example, the VGA >>> card passed through to the VM, but it still works inside the domU. It >>> just means the device doesn''t support FLR. 
>>> >>> Gordan >>> >>> >>> >>> >>> ------------------------------ >>> >>> Message: 17 >>> Date: Wed, 12 Jun 2013 00:30:06 +0200 >>> From: Dario Faggioli <dario.faggioli@citrix.com> >>> To: Ian Campbell <Ian.Campbell@citrix.com> >>> Cc: xen-users@lists.xen.org, Russ Pavlicek >>> <russell.pavlicek@xenproject.org> >>> Subject: Re: [Xen-users] Blog: Installing the Xen hypervisor on Fedora >>> 19 >>> Message-ID: <1370989806.20028.51.camel@Solace> >>> Content-Type: text/plain; charset="utf-8" >>> >>> On gio, 2013-06-06 at 09:52 +0100, Ian Campbell wrote: >>>> On Wed, 2013-06-05 at 22:11 -0400, Russ Pavlicek wrote: >>>>> Saw this post from Major Hayden of Rackspace: >>>>> >>>>> http://major.io/2013/06/02/installing-the-xen-hypervisor-on-fedora-19/ >>>> >>>> It''d be good to get this linked from >>>> http://wiki.xen.org/wiki/Category:Fedora >>> Well, although I''m very happy about blog posts like these starting to >>> come up spontaneously all around the place, allow me to say tat we have >>> the Fedora host install page on the Wiki >>> (http://wiki.xen.org/wiki/Fedora_Host_Installation) that contains >>> exactly the same information (it actually has much more info, and it is >>> of course part of the Fedora wiki category!) >>> >>> That being said, I guess I can add a section there (in the Fedora >>> Category page) about ''external'' pages, posts, etc... Let me think how >>> and where to put it... >>> >>> Thanks and Regards, >>> Dario >>> >>> -- >>> <<This happens because I choose it to happen!>> (Raistlin Majere) >>> ----------------------------------------------------------------- >>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli >>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK) >>> >>> -------------- next part -------------- >>> A non-text attachment was scrubbed... >>> Name: signature.asc >>> Type: application/pgp-signature >>> Size: 198 bytes >>> Desc: This is a digitally signed message part >>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130612/67c37d4d/attachment.pgp> >>> >>> ------------------------------ >>> >>> Message: 18 >>> Date: Wed, 12 Jun 2013 09:01:56 +0200 >>> From: Dario Faggioli <dario.faggioli@citrix.com> >>> To: xen-devel@lists.xen.org >>> Cc: xen-users@lists.xen.org, xen-api@lists.xen.org >>> Subject: [Xen-users] Xen Test Day is today! >>> Message-ID: <1371020516.9946.5.camel@Abyss> >>> Content-Type: text/plain; charset="utf-8" >>> >>> Hi everybody, >>> >>> Allow me to remind you that the 4th Xen Test Day is happening today, so >>> come and join us on #xentest on freenode! >>> >>> We will be testing Xen 4.3 RC4, released yesterday and, probably, *the* >>> *last* release candidate! For more info, see: >>> >>> - on Xen Test Days: >>> http://wiki.xen.org/wiki/Xen_Test_Days >>> >>> - on getting and testing RC4: >>> http://wiki.xen.org/wiki/Xen_4.3_RC4_test_instructions >>> >>> - for generic testing information: >>> http://wiki.xen.org/wiki/Testing_Xen >>> >>> See you all on freenode, channel #xentest. >>> >>> Regards >>> Dario >>> >>> -- >>> <<This happens because I choose it to happen!>> (Raistlin Majere) >>> ----------------------------------------------------------------- >>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli >>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK) >>> >>> -------------- next part -------------- >>> A non-text attachment was scrubbed... 
>>> Name: signature.asc >>> Type: application/pgp-signature >>> Size: 198 bytes >>> Desc: This is a digitally signed message part >>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130612/2fcb2e25/attachment.pgp> >>> >>> ------------------------------ >>> >>> Message: 19 >>> Date: Wed, 12 Jun 2013 09:44:02 +0200 >>> From: Fabio Fantoni <fabio.fantoni@m2r.biz> >>> To: Dario Faggioli <dario.faggioli@citrix.com> >>> Cc: xen-users@lists.xen.org, xen-api@lists.xen.org, >>> xen-devel@lists.xen.org >>> Subject: Re: [Xen-users] [Xen-devel] Xen Test Day is today! >>> Message-ID: <51B826C2.3030706@m2r.biz> >>> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed" >>> >>> Il 12/06/2013 09:01, Dario Faggioli ha scritto: >>>> Hi everybody, >>>> >>>> Allow me to remind you that the 4th Xen Test Day is happening today, so >>>> come and join us on #xentest on freenode! >>>> >>>> We will be testing Xen 4.3 RC4, released yesterday and, probably, *the* >>>> *last* release candidate! For more info, see: >>>> >>>> - on Xen Test Days: >>>> http://wiki.xen.org/wiki/Xen_Test_Days >>>> >>>> - on getting and testing RC4: >>>> http://wiki.xen.org/wiki/Xen_4.3_RC4_test_instructions >>>> >>>> - for generic testing information: >>>> http://wiki.xen.org/wiki/Testing_Xen >>>> >>>> See you all on freenode, channel #xentest. >>>> >>>> Regards >>>> Dario >>>> >>>> >>>> >>>> _______________________________________________ >>>> Xen-devel mailing list >>>> Xen-devel@lists.xen.org >>>> http://lists.xen.org/xen-devel >>> I saw that qemu upstrem tag is not updated (on Config.mk >>> QEMU_UPSTREAM_REVISION ?= qemu-xen-4.3.0-rc1) but on git there are new >>> patches, why? >>> -------------- next part -------------- >>> An HTML attachment was scrubbed... >>> URL: <http://lists.xen.org/archives/html/xen-users/attachments/20130612/d02fbfa4/attachment.html> >>> >>> ------------------------------ >>> >>> _______________________________________________ >>> Xen-users mailing list >>> Xen-users@lists.xen.org >>> http://lists.xen.org/xen-users >>> >>> >>> End of Xen-users Digest, Vol 100, Issue 17 >>> ****************************************** >> >> _______________________________________________ >> Xen-users mailing list >> Xen-users@lists.xen.org >> http://lists.xen.org/xen-users
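To make the brick layout described above concrete, here is a rough command-line sketch. The slotN names are the vdev aliases from the zpool status output earlier in the thread; the host names (gluster1-3), brick paths and volume name are assumptions for illustration, not the actual commands used on that cluster.

# one 12-drive brick: 10-disk raidz1 + 1 SSD cache device + 1 hot spare
zpool create brick0 raidz1 slot0 slot1 slot2 slot3 slot4 slot5 slot6 slot7 slot8 slot9 \
    cache slot10 spare slot11
zfs create brick0/gv0

# a 3-way replicated Gluster volume over RDMA, one brick per node
gluster volume create gv0 replica 3 transport rdma \
    gluster1:/brick0/gv0 gluster2:/brick0/gv0 gluster3:/brick0/gv0
gluster volume start gv0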
On (12/06/13 21:03), Anastas Semenov wrote:
> Date: Wed, 12 Jun 2013 21:03:40 +0000
> From: Anastas Semenov <anastas.semenov@gmail.com>
> To: xen-users@lists.xen.org
> Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
>
> Some people have been asking about ZFS configurations used for Gluster bricks..
> Well.. it's quite simple.
> On a 36 drive machine, we chose to configure it with 3 bricks @ 12
> drives per brick:
> - every brick consists of 12 drives
> - 10 drives are used for RAIDZ1 (2TB 3.5" WD Enterprise Black HDD)
> - 1 drive is used as cache (64GB 2.5" SSD with AdaptaDrive bracket
> for great fit)
> - 1 drive is used as a spare (2TB 3.5" WD Enterprise Black HDD)

Firehose much? (takes a big gulp)

Excellent info, Anastas - thanks for taking the time to write it up!
On Thu, 13 Jun 2013 07:02:10 -0500, James Triplett <jm-xenusers@vj8.net> wrote:
> >> - 10 drives are used for RAIDZ1 (2TB 3.5" WD Enterprise Black HDD)

The problem I see with this is that you're absolutely killing your random I/O performance. In this configuration every 12-drive JBOD brick will perform random I/O about as well as a single hard drive. Also, 10 drives is too many for RAIDZ1.
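One way to sanity-check that claim before putting data on a brick is a quick random-read run with fio. This is only a sketch: the path /brick0/test and the job parameters are assumptions, libaio needs to be installed, and since ZFS on Linux did not support O_DIRECT at the time, the ARC will inflate the numbers unless the working set is larger than RAM.

# 4K random reads against the brick: 4 workers, queue depth 32, 60 seconds
fio --name=randread --directory=/brick0/test --rw=randread --bs=4k \
    --ioengine=libaio --iodepth=32 --numjobs=4 --size=2G \
    --runtime=60 --time_based --group_reporting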
On Wed, 06/12/2013 05:03 PM, Anastas Semenov <anastas.semenov@gmail.com> wrote:
> Some people have been asking about ZFS configurations used for Gluster bricks..

This may sound like a stupid question, but are you running ZFS on Solaris or Linux?
Hello Errol,

We're running ZFS on Debian Wheezy (on slightly older storage, we are running 2 pools, ~40TB each, on Debian Squeeze with the 3.2 kernel from backports).

Anastas S
sysadmin++

On Thu, Jun 13, 2013 at 2:02 PM, Errol Neal <eneal@businessgrade.com> wrote:
> On Wed, 06/12/2013 05:03 PM, Anastas Semenov <anastas.semenov@gmail.com> wrote:
>> Some people have been asking about ZFS configurations used for Gluster bricks..
>
> this may sound like a stupid question, but are you running ZFS on Solaris or Linux?
On Thu, 06/13/2013 10:14 AM, Anastas Semenov <anastas.semenov@gmail.com> wrote:
> Hello Errol,
>
> We're running ZFS on Debian Wheezy (on slightly older storage, we are
> running 2 pools ~40TB each on Debian Squeeze with 3.2 kernel from
> backports)
>
> Anastas S

Wow.. I'd call you courageous at the very least! I know that ZFS on Linux is supposedly "ready for wide scale deployment" but I have my doubts (not based on real-world tests, of course).
On 6/13/13, Errol Neal <eneal@businessgrade.com> wrote:
> On Thu, 06/13/2013 10:14 AM, Anastas Semenov <anastas.semenov@gmail.com>
> wrote:
>> Hello Errol,
>>
>> We're running ZFS on Debian Wheezy (on slightly older storage, we are
>> running 2 pools ~40TB each on Debian Squeeze with 3.2 kernel from
>> backports)
>>
>> Anastas S
>
> Wow.. I'd call you courageous at the very least! I know that ZFS on Linux
> is supposedly "ready for wide scale deployment" but I have my doubts (not
> based on real world tests of course).
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@lists.xen.org
> http://lists.xen.org/xen-users

You lost me at backports. Are you using this in production?

N.
Nick,

Debian 6.x (aka Squeeze) runs the older 2.6.32 kernel; a newer 3.2 kernel is available from squeeze-backports, which is just another repo containing newer packages backported from the much "fresher" testing branch.

But yes, we are using it in production..

Anastas S
sysadmin++

On Thu, Jun 13, 2013 at 2:30 PM, Nick Khamis <symack@gmail.com> wrote:
> On 6/13/13, Errol Neal <eneal@businessgrade.com> wrote:
>> On Thu, 06/13/2013 10:14 AM, Anastas Semenov <anastas.semenov@gmail.com>
>> wrote:
>>> Hello Errol,
>>>
>>> We're running ZFS on Debian Wheezy (on slightly older storage, we are
>>> running 2 pools ~40TB each on Debian Squeeze with 3.2 kernel from
>>> backports)
>>>
>>> Anastas S
>>
>> Wow.. I'd call you courageous at the very least! I know that ZFS on Linux
>> is supposedly "ready for wide scale deployment" but I have my doubts (not
>> based on real world tests of course).
>
> You lost me at backports. Are you using this in production?
>
> N.
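For anyone who wants to reproduce that on Squeeze, enabling backports looks roughly like this. The repository line is the standard squeeze-backports entry of that era; the linux-image-amd64 metapackage name is from memory, so double-check it before relying on it.

echo "deb http://backports.debian.org/debian-backports squeeze-backports main" >> /etc/apt/sources.list
apt-get update
# backported packages are never pulled in by default; they must be selected with -t
apt-get -t squeeze-backports install linux-image-amd64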
Hey Mark,

>>> - 10 drives are used for RAIDZ1 (2TB 3.5" WD Enterprise Black HDD)
>
> The problem I see with this is that you're absolutely killing your random
> I/O performance. In this configuration every 12 drive JBOD brick will
> perform random I/O about as well as a single hard drive.

There are choices to be made, as always: capacity vs. performance... We are setting this cluster up for Hadoop, with large(r) files and a lot of sequential reads.. I recognize that switching to striped 8+2 raidz2 groups would be much better, and I'd love to do it, as long as I can get the $$ for it..

> Also 10 drives is too much for RAIDZ1.

While a 10-drive raidz1 is not the most optimal, it is still pretty close to the recommended 4-8 drives. Perhaps we are better off going to raidz2.. This is still very much in the evaluation stage.. but so far, we've seen reasonable performance. More testing required..

Check out this page on ZFS for Lustre with Infiniband and 10GigE:
http://zfsonlinux.org/llnl-zfs-lustre.html

Anastas S
sysadmin++
On Thu, 13 Jun 2013 09:42:54 -0500, Anastas Semenov <anastas.semenov@gmail.com> wrote:
>
> While 10 drive raidz1 is not most optimal, it is still pretty close to
> recommended 4-8 drives.
> Perhaps, we are better off going to raidz2.. This is still very much
> in evaluation stage.. but so far, we've seen reasonable performance.
> More testing required..

Your 10-drive RAIDZ1 might also not be optimal because you effectively have 9 data drives and 1 parity drive. Now, I know ZFS doesn't follow normal RAID5/6 semantics and dedicate entire drives to parity, but the way it splits the data and stripes it across the drives can be considered similar. So what you're doing is taking the dataset to be written and dividing it by an odd number (9 chunks + parity). I don't have the ability to point to the code or provide benchmarks at the moment, so I'm just parroting what I've been told, but I'm pretty confident you'd get better performance from a 10-drive RAIDZ2 (data being split into 8 chunks, an even number), or from a 9-drive or 11-drive RAIDZ. In your situation I'd probably go for two 5-drive RAIDZ vdevs if I could afford the loss of storage. The benchmarks I ran when I built my system did seem to mirror the results I was told to expect.

Good luck, and you can never do too much testing!
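To put rough numbers on the stripe-splitting point, and to show what the suggested alternatives would look like, here is a sketch rather than a recipe; the slotN names are just the vdev aliases from the earlier zpool status output.

# a 128 KiB record is split across the data disks of a vdev:
#   10-disk raidz1 -> 9 data disks: 128 KiB / 9 ~= 14.2 KiB per disk (uneven)
#   10-disk raidz2 -> 8 data disks: 128 KiB / 8  = 16 KiB per disk (a power of two)
zpool create brick0 raidz2 slot0 slot1 slot2 slot3 slot4 slot5 slot6 slot7 slot8 slot9

# or two 5-disk raidz1 vdevs (4 data disks each), trading capacity for more IOPS
zpool create brick0 raidz1 slot0 slot1 slot2 slot3 slot4 \
    raidz1 slot5 slot6 slot7 slot8 slot9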
On Thu, 13 Jun 2013 14:42:54 +0000, Anastas Semenov <anastas.semenov@gmail.com> wrote:
> While 10 drive raidz1 is not most optimal, it is still pretty close
> to recommended 4-8 drives.

Pretty close is not really much better than any other imperfect value. This is because ZFS stores data in variable-length stripes that can only be powers of 2 <= 128KB. This means that you should keep your RAIDZs small and close to the optimal data+parity disk counts.

Then again, any RAIDZ[123] is going to be slow enough that you probably won't be able to tell much difference (much like RAID[56]).

Gordan
Exactly my thought, Gordan.. But I will certainly consider raidz2.

Anastas S
sysadmin++

>> While 10 drive raidz1 is not most optimal, it is still pretty close to
>> recommended 4-8 drives.
>
> Pretty close is not really much better than any other imperfect value.
> This is because ZFS stores data in variable-length stripes that can
> only be powers of 2 <= 128KB. This means that you should keep your
> RAIDZs small and close to the optimal data+parity disk counts.
>
> Then again, any RAIDZ[123] is going to be slow enough that you
> probably won't be able to tell much difference (much like RAID[56]).
>
> Gordan
Hi all,

I'm enjoying learning about things I hadn't heard of, or that weren't stable, when I made my choices years ago... Just a thought... I've got an old system myself: DRBD, with RAID mirrors backed to another server without a mirror, two servers running a kind of reciprocal layout:

A-Vol1 (raid1)        -----> B-Vol3 (single drive)
A-Vol2 (raid1)        -----> B-Vol4 (single drive)
A-Vol3 (single drive) <----- B-Vol1 (raid1)
A-Vol4 (single drive) <----- B-Vol2 (raid1)

The idea being that a failure reduces performance rather than leaving a server idle. So far so good.

Might be nice to see some of the reasoning behind these architecture choices, so that more people might benefit from the knowledge (or learn more from the feedback) as they dissect why they chose "RAID X on filesystem Y" for their environment.

I'm enjoying the information - hope I can make use of it next time I can afford / need to upgrade.

Cheers,

Mitch.

-----Original Message-----
From: xen-users-bounces@lists.xen.org [mailto:xen-users-bounces@lists.xen.org] On Behalf Of Gordan Bobic
Sent: June 13, 2013 8:47 AM
To: Anastas Semenov
Cc: xen-users@lists.xen.org; Mark Felder; James Triplett
Subject: Re: [Xen-users] Linux Fiber or iSCSI SAN
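For anyone wanting to sketch that reciprocal layout in DRBD terms, one resource of the A-to-B direction might look roughly like the following; the hostnames, block devices and addresses are made up, exact syntax varies between DRBD 8.3 and 8.4, and a real setup needs one resource per volume plus the B-to-A counterparts.

cat > /etc/drbd.d/vol1.res <<'EOF'
resource vol1 {
  protocol C;                    # synchronous replication
  on serverA {
    device    /dev/drbd1;
    disk      /dev/md1;          # RAID1 mirror on the A side
    address   192.168.10.1:7789;
    meta-disk internal;
  }
  on serverB {
    device    /dev/drbd1;
    disk      /dev/sdc1;         # single backing drive on the B side
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}
EOF
drbdadm create-md vol1           # initialise metadata (run on both hosts)
drbdadm up vol1                  # bring the resource up (run on both hosts)
drbdadm primary vol1             # on serverA only; the first run may need --force to start the initial sync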