I've a dom0 running Debian lenny amd64 and some domUs, also Debian lenny amd64.
I've used LVM: I set up a volume group and assigned each domU its own logical volume.
Now we have purchased some HBAs to connect to a SAN storage array.
Is there a way to connect my domUs to the SAN storage?
On Mon, 2009-09-21 at 17:55 +0200, Mauro wrote:
> I've a dom0 running Debian lenny amd64 and some domUs, also Debian lenny amd64.
> I've used LVM: I set up a volume group and assigned each domU its own logical volume.
> Now we have purchased some HBAs to connect to a SAN storage array.
> Is there a way to connect my domUs to the SAN storage?

Yes. Like this:

"I've used LVM: I set up a volume group and assigned each domU its own logical volume."

By George, I think he's got it. =)

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
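In practice that amounts to: once dom0 sees the SAN LUN through the HBA, it is just another block device and slots into the same LVM layout. A minimal sketch, assuming the LUN shows up in dom0 as /dev/sdb; the device and volume names (vg_san, mail-disk) are placeholders, not from the thread:

    # dom0: the HBA-attached LUN appears as an ordinary disk, e.g. /dev/sdb
    pvcreate /dev/sdb
    vgcreate vg_san /dev/sdb
    lvcreate -L 200G -n mail-disk vg_san

    # domU config: point the guest's virtual disk at the new LV
    disk = [ 'phy:/dev/vg_san/mail-disk,xvda,w' ]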
Perhaps he means using PCI passthrough to let the domU access the HBA directly?

-Bruce

On Mon, Sep 21, 2009 at 10:14 AM, John Madden <jmadden@ivytech.edu> wrote:
> On Mon, 2009-09-21 at 17:55 +0200, Mauro wrote:
> > I've a dom0 running Debian lenny amd64 and some domUs, also Debian lenny amd64.
> > I've used LVM: I set up a volume group and assigned each domU its own logical volume.
> > Now we have purchased some HBAs to connect to a SAN storage array.
> > Is there a way to connect my domUs to the SAN storage?
>
> Yes. Like this:
>
> "I've used LVM: I set up a volume group and assigned each domU its own logical volume."
>
> By George, I think he's got it. =)
>
> --
> John Madden
> Sr UNIX Systems Engineer
> Ivy Tech Community College of Indiana
> jmadden@ivytech.edu
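For reference, passing the HBA itself into a guest looks roughly like this on a lenny-era Xen dom0; the PCI address 0000:05:00.0 is only an example, and the exact pciback setup depends on the dom0 kernel:

    # dom0 kernel command line: hide the HBA from dom0 so pciback owns it
    #   pciback.hide=(0000:05:00.0)

    # domU config: hand the hidden device to the guest
    pci = [ '0000:05:00.0' ]

The domU then drives the HBA (and sees its LUNs) directly, but only that one domU can use the card.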
Hi,

I'm looking for information about SAN storage.

In my case, I want to use 3 physical servers and 1 SAN to manage 10-15 virtual servers.

What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?

What kind of device are you using?

For the moment, I'm interested in HP technologies, with the BladeSystem c3000 and MSA2324.

Thanks.

John Madden wrote:
> On Mon, 2009-09-21 at 17:55 +0200, Mauro wrote:
>> I've a dom0 running Debian lenny amd64 and some domUs, also Debian lenny amd64.
>> I've used LVM: I set up a volume group and assigned each domU its own logical volume.
>> Now we have purchased some HBAs to connect to a SAN storage array.
>> Is there a way to connect my domUs to the SAN storage?
>
> Yes. Like this:
>
> "I've used LVM: I set up a volume group and assigned each domU its own logical volume."
>
> By George, I think he's got it. =)
> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
> virtual servers.
>
> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?

That depends on your needs. I'm personally skittish about passing disk blocks over Ethernet and prefer FC, but it's expensive, and if you believe [some of] the pundits, everything is going to Ethernet eventually anyway. I think it's still safe to say, though, that at this time, if you need really reliable disk at theoretically higher performance and you can afford it, go with FC.

> What kind of device are you using?

4Gb/s FC to EMC DMX-3 and IBM DS-4700 through Brocade fabrics.

> For the moment, I'm interested in HP technologies, with the
> BladeSystem c3000 and MSA2324.

FWIW, our experience with HP's blades has been less than thrilling; I wouldn't recommend them. We have a few (5) of IBM's BladeCenters, though, and we've been extremely happy with them. I think they're more expensive up-front (?) but well worth it.

John

--
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
Am 21.09.2009 20:16, schrieb John Madden:
>> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
>> virtual servers.
>>
>> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?
>
> That depends on your needs. I'm personally skittish about passing disk
> blocks over Ethernet and prefer FC, but it's expensive, and if you
> believe [some of] the pundits, everything is going to Ethernet
> eventually anyway. I think it's still safe to say, though, that at this
> time, if you need really reliable disk at theoretically higher
> performance and you can afford it, go with FC.
>
>> What kind of device are you using?
>
> 4Gb/s FC to EMC DMX-3 and IBM DS-4700 through Brocade fabrics.
>
>> For the moment, I'm interested in HP technologies, with the
>> BladeSystem c3000 and MSA2324.
>
> FWIW, our experience with HP's blades has been less than thrilling; I
> wouldn't recommend them. We have a few (5) of IBM's BladeCenters, though,
> and we've been extremely happy with them. I think they're more
> expensive up-front (?) but well worth it.
>
> John

FC is dead; go for 10 GB/s iSCSI (based on 10 GB/s Ethernet). It's also cheaper...

Florian
On Mon, Sep 21, 2009 at 02:07:59PM -0400, William wrote:
> Hi,
>
> I'm looking for information about SAN storage.
>
> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
> virtual servers.
>
> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?

I've been using Equallogic iSCSI SAN storage for years without problems. It's n*1 Gbit/sec; you can scale it with multiple NICs and dm-multipath.

I don't see any reason to go for FC nowadays, unless you have some special needs.

> What kind of device are you using?

Equallogic PS series arrays.

-- Pasi
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of John Madden
> Sent: Monday, September 21, 2009 2:17 PM
> To: William
> Cc: xen-users; Mauro
> Subject: Re: [Xen-users] xen and SAN.
>
>> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?
>
> That depends on your needs. I'm personally skittish about passing disk
> blocks over Ethernet and prefer FC, but it's expensive, and if you
> believe [some of] the pundits, everything is going to Ethernet
> eventually anyway. I think it's still safe to say, though, that at this
> time, if you need really reliable disk at theoretically higher
> performance and you can afford it, go with FC.

FC is actually easier to justify in virtualized environments, where the cost factors are lower and the need for performance greater. Compare the cost of HBAs and switch fabric for 15 small physical hosts vs. 3 big dom0 hosts. Ethernet may never outperform FC; its main selling point is that it is cheaper and simpler to implement.

-Jeff
2009/9/21 Bruce Edge <bruce.edge@gmail.com>:
> Perhaps he means using PCI passthrough to let the domU access the HBA
> directly?

Really, I don't know; this is the first time I've dealt with SAN and Xen. I know that we have an HBA with Fibre Channel. I need some advice on how to use a domU with SAN storage.

Specifically, I have a mail server on a domU with a Debian lenny amd64 operating system. The filesystem is all on a logical volume named mail-disk of about 200GB. I have all the mail accounts under /var/vmail. I want to create another mail server and use the same mail accounts. So my need is to move /var/vmail onto the SAN storage so that it can be accessed by both the first mail server and the second mail server. Besides, I need much more space for /var/vmail, so moving it onto SAN storage seems to be the best solution.
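One caveat: a plain filesystem on a shared LUN cannot safely be mounted read-write by two servers at once, so sharing /var/vmail between two mail servers means either an NFS export or a cluster filesystem (e.g. OCFS2 or GFS) on that LUN. For simply giving one domU a SAN-backed /var/vmail, a sketch along these lines would do; the device names are purely illustrative:

    # domU config: pass the SAN-backed device through as an extra disk
    disk = [ 'phy:/dev/vg_domu/mail-disk,xvda,w',
             'phy:/dev/mapper/vmail_lun,xvdb,w' ]

    # inside the domU:
    #   mkfs.ext3 /dev/xvdb
    #   mount /dev/xvdb /var/vmail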
Hi Pasi,

Can I have more information about your topology?

- Number of physical and virtual servers per SAN (if possible).

With Equallogic by Dell, there are two controllers, but 1 active and 1 passive, so if I understand correctly, you have bonded all NICs (3) per controller to obtain 3 Gbit/sec?

Thanks.

Pasi Kärkkäinen wrote:
> On Mon, Sep 21, 2009 at 02:07:59PM -0400, William wrote:
>> Hi,
>>
>> I'm looking for information about SAN storage.
>>
>> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
>> virtual servers.
>>
>> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?
>
> I've been using Equallogic iSCSI SAN storage for years without problems.
> It's n*1 Gbit/sec; you can scale it with multiple NICs and dm-multipath.
>
> I don't see any reason to go for FC nowadays, unless you have some special needs.
>
>> What kind of device are you using?
>
> Equallogic PS series arrays.
>
> -- Pasi
On Mon, Sep 21, 2009 at 03:55:33PM -0400, William wrote:
> Hi Pasi,
>
> Can I have more information about your topology?
>
> - Number of physical and virtual servers per SAN (if possible).

It depends what you mean by SAN. With Equallogic storage arrays you can combine multiple arrays into a "group", so you can scale the setup that way. Volumes will be distributed across the members of the group and automatically load-balanced (unless you want to configure things manually).

> With Equallogic by Dell, there are two controllers, but 1 active and 1
> passive, so if I understand correctly, you have bonded all NICs (3) per
> controller to obtain 3 Gbit/sec?

Yes, Equallogic is active/standby. If the primary (active) controller fails, the secondary (standby) controller takes over all the connections.

You don't do bonding with Equallogic. Instead you use multiple NICs and multipathing in the initiator to get n*1 Gbit/sec. The Equallogic storage array will redirect iSCSI connections to different ports and load-balance the sessions this way.

So if you want to get 3 Gbit/sec to your server, it needs to have 3 gigabit NICs; you create a session/path from each NIC to the storage, and then use dm-multipath to combine the paths.

-- Pasi
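As a rough illustration of that initiator-side setup with open-iscsi and dm-multipath (the portal address and volume IQN below are placeholders):

    # discover the group IP and log in; repeat the login once per NIC,
    # with each session bound to a different interface (see iscsiadm -m iface)
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10
    iscsiadm -m node -T iqn.2001-05.com.equallogic:example-volume -l

    # dm-multipath then combines the resulting /dev/sdX paths into one device
    multipath -ll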
>>> On 2009/09/21 at 12:28, Florian Manschwetus <florianmanschwetus@gmx.de> wrote:
> Am 21.09.2009 20:16, schrieb John Madden:
>>> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
>>> virtual servers.
>>>
>>> What is the best solution: iSCSI (1Gb/s) or Fibre Channel (4Gb/s)?
>>
>> That depends on your needs. I'm personally skittish about passing disk
>> blocks over Ethernet and prefer FC, but it's expensive, and if you
>> believe [some of] the pundits, everything is going to Ethernet
>> eventually anyway. I think it's still safe to say, though, that at this
>> time, if you need really reliable disk at theoretically higher
>> performance and you can afford it, go with FC.
>>
>>> What kind of device are you using?
>>
>> 4Gb/s FC to EMC DMX-3 and IBM DS-4700 through Brocade fabrics.
>>
>>> For the moment, I'm interested in HP technologies, with the
>>> BladeSystem c3000 and MSA2324.
>>
>> FWIW, our experience with HP's blades has been less than thrilling; I
>> wouldn't recommend them. We have a few (5) of IBM's BladeCenters, though,
>> and we've been extremely happy with them. I think they're more
>> expensive up-front (?) but well worth it.
>>
>> John
>
> FC is dead; go for 10 GB/s iSCSI (based on 10 GB/s Ethernet). It's also cheaper...
>
> Florian

I'm not sure I buy that... If FC were dead, companies would not still be developing technologies based upon it, like 8Gb/s FC, which is alive and well. The major server manufacturers also still offer their servers - 1U, 2U, blade, etc. - with FC HBAs. Cisco is still building, selling, and supporting FC switches. FC is far from dead.

Furthermore, even on 10Gb (not GB, Gb) Ethernet, iSCSI still has higher latency and higher overhead than FC, so even if FC's raw throughput is lower (even much lower), depending on the types of files you're using on those FC connections, FC may actually yield better performance than 10Gb iSCSI.

Just my two bits...

-Nick
On Mon, Sep 21, 2009 at 4:22 PM, Nick Couchman <Nick.Couchman@seakr.com> wrote:
> I'm not sure I buy that... If FC were dead, companies would not still be
> developing technologies based upon it, like 8Gb/s FC, which is alive and
> well. The major server manufacturers also still offer their servers - 1U,
> 2U, blade, etc. - with FC HBAs. Cisco is still building, selling, and
> supporting FC switches. FC is far from dead. Furthermore, even on 10Gb
> (not GB, Gb) Ethernet, iSCSI still has higher latency and higher overhead
> than FC, so even if FC's raw throughput is lower (even much lower),
> depending on the types of files you're using on those FC connections, FC
> may actually yield better performance than 10Gb iSCSI. Just my two bits...

Right after world peace, my big wish would be to have InfiniBand everywhere for storage...

--
Javier
> -----Original Message-----
> From: xen-users-bounces@lists.xensource.com [mailto:xen-users-bounces@lists.xensource.com] On Behalf Of Florian Manschwetus
> Sent: Monday, September 21, 2009 2:29 PM
> To: John Madden
> Cc: William; xen-users; Mauro
> Subject: Re: [Xen-users] xen and SAN.
>
> FC is dead; go for 10 GB/s iSCSI (based on 10 GB/s Ethernet). It's also cheaper...

iSCSI isn't the only game in town, even on Ethernet. An alternative to iSCSI worth considering for some Linux uses is AoE (ATA-over-Ethernet). It is inexpensive, simple to configure, and carries less protocol overhead than iSCSI, such that a dedicated HBA is usually of no benefit. Recent Linux aoe drivers also support multipath, so additional interfaces can be used to yield more throughput.

Soon there will be another alternative, FCoE (based on 10Gb/s Ethernet), but it doesn't appear to be in widespread use at this time.

-Jeff
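For the curious, a minimal AoE sketch with vblade and aoetools; the shelf/slot numbers, interface, and device names are examples only:

    # storage host: export a block device as AoE shelf 0, slot 1 on eth0
    vbladed 0 1 eth0 /dev/vg_san/export_lv

    # client (e.g. dom0): load the driver; the export appears as /dev/etherd/e0.1
    modprobe aoe
    aoe-stat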