First, I've searched the list and couldn't find anything.

This is my scenario:
- 5 dom0 lenny servers, using LVM with a lot of (domU) LVs inside,
  domU/dom0 storage on a local hdd, and each server has two 1GbE
  cards connected through a Cisco 3750 switch.
- A storage (lenny) server running a 6TB RAID10, one big VG (empty
  for now), with 6 GbE NICs: 5 NICs connected by a crossover cable to
  each dom0 separately, and the other one to the Cisco switch.

We want to move the LV devices from each dom0 to the new VG on the
storage server (I mean move every LVM device), then export each LV
from the storage server with vblade/AoE or iSCSI to the corresponding
dom0. The dom0s' own storage stays local. Then we update every
/etc/xen/domU.cfg to use the new device attached to its dom0, and
finally run "xm create domU.cfg" to start the domU.

I have, untested, a way to move LVs from an old VG to a new remote VG:

cat /dev/VG1_dom0/lv1 | ssh storage "cat > /dev/VG2_storage/lv1"

So, at the end I should have all the LV devices on the storage server.

Final questions:

do you know if it works? I mean, will a domU start without problems
using these devices attached?
is there another, better way to move several domUs' storage to a SAN
server?
do you think it's a good idea to use crossovers between dom0 and SAN
instead of a switch?

I'd appreciate your comments.
thanks in advance.

--
Regards;
Israel Garcia
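To make the workflow concrete, here is a minimal sketch of one round
trip for a single LV, assuming vblade/AoE is used for the export; the
VG/LV names, sizes, shelf/slot numbers and interface names below are
only examples, not taken from the thread:

# on the storage server: create a target LV at least as large as the source
lvcreate -L 10G -n lv1 VG2_storage

# on the dom0, with the domU shut down, copy the LV across
xm shutdown domU1
dd if=/dev/VG1_dom0/lv1 bs=1M | ssh storage "dd of=/dev/VG2_storage/lv1 bs=1M"

# on the storage server: export the copied LV over AoE (shelf 0, slot 1, via eth1)
vbladed 0 1 eth1 /dev/VG2_storage/lv1

# back on the dom0: load the AoE initiator and find the exported device
modprobe aoe
aoe-discover
aoe-stat                       # should list e0.1 -> /dev/etherd/e0.1

# point /etc/xen/domU1.cfg at the new device, for example
#   disk = [ 'phy:/dev/etherd/e0.1,xvda,w' ]
# then start the guest
xm create /etc/xen/domU1.cfg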
Fajar A. Nugraha
2009-Nov-02 05:56 UTC
Re: [Xen-users] moving LV's devices to a SAN server.
On Sat, Oct 31, 2009 at 12:00 AM, Israel Garcia <igalvarez@gmail.com> wrote:
> I have, untested, a way to move LVs from an old VG to a new remote VG:
> cat /dev/VG1_dom0/lv1 | ssh storage "cat > /dev/VG2_storage/lv1"
> So, at the end I should have all the LV devices on the storage server.

I prefer "dd" instead of "cat", but basically anything that can
transfer the content of a block device or file should do.

>
> Final questions:
>
> do you know if it works? I mean, will a domU start without problems
> using these devices attached?

If you copy the storage (dd, cat, etc.) while the domU is shut down
then it should generally work.

> is there another, better way to move several domUs' storage to a SAN server?

"better" is in the eye of the beholder :D
For example, if the domU is Windows, I prefer to dd the first 512
bytes of the disk (to copy the MBR and partition table) and use
ntfsclone afterwards. If the domU is Linux I prefer to use mkfs +
tar/rsync. It's more efficient in terms of data transfer, but adds
somewhat more complexity.

> do you think it's a good idea to use crossovers between dom0 and SAN
> instead of a switch?

Depends on your setup. If your SAN has lots of ports and you can use
at least two cables for crossover, it might be best to do so, thus
eliminating the switch as a single point of failure. However, on
normal datacenter setups (with lots of servers accessing a SAN) you'd
probably want to use redundant switches instead.

--
Fajar
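A rough sketch of the two per-OS approaches mentioned above, assuming
the exported target is already visible on the dom0 as /dev/etherd/e0.1;
the LV names and partition layout are placeholders, not from the thread:

# Windows domU (disk image with an MBR and an NTFS partition):
# copy just the MBR/partition table, then clone only the used NTFS blocks
dd if=/dev/VG1_dom0/lv_win of=/dev/etherd/e0.1 bs=512 count=1
blockdev --rereadpt /dev/etherd/e0.1       # make the kernel see the new table
kpartx -a /dev/VG1_dom0/lv_win             # exposes e.g. /dev/mapper/lv_win1
                                           # (exact mapper name can differ)
ntfsclone --overwrite /dev/etherd/e0.1p1 /dev/mapper/lv_win1

# Linux domU (here the LV holds the filesystem directly, no partition table):
# make a fresh filesystem on the target and copy files instead of raw blocks
mkfs.ext3 /dev/etherd/e0.1
mkdir -p /mnt/src /mnt/dst
mount /dev/VG1_dom0/lv_lin /mnt/src
mount /dev/etherd/e0.1 /mnt/dst
rsync -aH --numeric-ids /mnt/src/ /mnt/dst/
umount /mnt/src /mnt/dst
# plus whatever fstab adjustments the guest needs afterwards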
Hi Fajar,

On 11/2/09, Fajar A. Nugraha <fajar@fajar.net> wrote:
> On Sat, Oct 31, 2009 at 12:00 AM, Israel Garcia <igalvarez@gmail.com> wrote:
>> I have, untested, a way to move LVs from an old VG to a new remote VG:
>> cat /dev/VG1_dom0/lv1 | ssh storage "cat > /dev/VG2_storage/lv1"
>> So, at the end I should have all the LV devices on the storage server.
>
> I prefer "dd" instead of "cat", but basically anything that can
> transfer the content of a block device or file should do.
>
>> do you know if it works? I mean, will a domU start without problems
>> using these devices attached?
>
> If you copy the storage (dd, cat, etc.) while the domU is shut down
> then it should generally work.

Ok..

>> is there another, better way to move several domUs' storage to a SAN
>> server?
>
> "better" is in the eye of the beholder :D
> For example, if the domU is Windows, I prefer to dd the first 512
> bytes of the disk (to copy the MBR and partition table) and use
> ntfsclone afterwards. If the domU is Linux I prefer to use mkfs +
> tar/rsync. It's more efficient in terms of data transfer, but adds
> somewhat more complexity.

Ok...

>> do you think it's a good idea to use crossovers between dom0 and SAN
>> instead of a switch?
>
> Depends on your setup. If your SAN has lots of ports and you can use
> at least two cables for crossover, it might be best to do so, thus
> eliminating the switch as a single point of failure. However, on
> normal datacenter setups (with lots of servers accessing a SAN) you'd
> probably want to use redundant switches instead.

I asked about using crossover cables because I don't know whether a
redundant-switch setup over 1GbE is enough to serve more than 10 dom0s
(more than 150 domUs) from the storage server; I'm worried about a
bottleneck at the Ethernet level. Note that every domU's storage would
live on the storage server. I'd appreciate any comments from anyone
with experience with this kind of setup.

thanks for your time.

regards,
Israel.

>
> --
> Fajar
>

--
Regards;
Israel Garcia
On Mon, Nov 2, 2009 at 8:30 AM, Israel Garcia <igalvarez@gmail.com> wrote:
> I don't know whether a
> redundant-switch setup over 1GbE is enough to serve more than
> 10 dom0s (more than 150 domUs)

Note that most switches these days support some form of link
aggregation, so you could pull, say, 4 wires from your SAN box to the
switch to get (almost) 4Gb of total bandwidth.

--
Javier
Hi Javier,

I've already read about link aggregation on the switch, and also
about setting up 802.3ad (bonding in mode 4) on the dom0s, but I
don't know whether that setup will keep working well as new dom0s are
connected to the SAN server in the future. I'm confused about the
network topology design for my SAN/dom0s. Do you know where I can
find info/docs about storage network topologies (setups) for a
1 gigabit Ethernet network? There are tons of docs about 10GbE, FC,
and so on, but for 1GbE I couldn't find any good doc about
topologies/switches/redundancy. Can you help me?

thanks in advance

regards,
Israel.

On 11/2/09, Javier Guerra <javier@guerrag.com> wrote:
> On Mon, Nov 2, 2009 at 8:30 AM, Israel Garcia <igalvarez@gmail.com> wrote:
>> I don't know whether a
>> redundant-switch setup over 1GbE is enough to serve more than
>> 10 dom0s (more than 150 domUs)
>
> Note that most switches these days support some form of link
> aggregation, so you could pull, say, 4 wires from your SAN box to the
> switch to get (almost) 4Gb of total bandwidth.
>
> --
> Javier
>

--
Regards;
Israel Garcia
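For what it's worth, a minimal sketch of what 802.3ad bonding could
look like on a lenny dom0, assuming the ifenslave package is installed
and eth0/eth1 are the two NICs going to the switch; the address is made
up, and the option spellings may need adjusting for your exact
ifenslave/bonding versions:

# /etc/modprobe.d/bonding: LACP (802.3ad) with MII link monitoring
options bonding mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4

# /etc/network/interfaces on the dom0
auto bond0
iface bond0 inet static
    address 192.168.10.11       # hypothetical storage-network address
    netmask 255.255.255.0
    pre-up modprobe bonding
    up ifenslave bond0 eth0 eth1
    down ifenslave -d bond0 eth0 eth1

Keep in mind 802.3ad only does anything useful if the switch ports the
two NICs plug into are configured as a matching LACP group.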
On Mon, Nov 2, 2009 at 11:39 AM, Israel Garcia <igalvarez@gmail.com> wrote:
> Can you help me?

not much, unfortunately. even if there are some standards, compliance
is spotty at best, so you'll have to test whether your devices
cooperate.

in any case, this was my reasoning for mentioning port aggregation (or
more precisely, link aggregation):

- the 'usual' topology for all things Ethernet (including iSCSI) is
to simply put the switches in the middle and pull one cable to each
host.

- in a SAN, this creates a bottleneck, since it's common to have just
one or two storage boxes for several hosts (especially when just
starting!). The single Ethernet port going to the storage box limits
the total access bandwidth to just 1Gb for all hosts.

- most iSCSI devices currently include several (4-6) GbE ports.

- the naïve way to use all these ports would be to ditch the Ethernet
switch and just connect one host to each port. This gives you 1Gb
dedicated to each host, and the total data bandwidth is limited only
by the platter and internal backbone speeds.

- unfortunately, this strategy is too limiting for later growth. Not
only do you have a limited number of ports, it also makes it nearly
impossible to add a second storage box.

- so, what you can do is keep the central switch and plug each host
into a single port of the switch; but for the storage box, use several
ports connected to several ports on the switch. If the link
aggregation features of the storage box and the switch match, you now
have a single very fat link between the box and the switch. From the
point of view of the hosts, it's exactly the same as the 'usual'
topology (one device on each switch port), but a single host won't be
able to saturate the storage bandwidth.

- expandability also isn't impaired: you can add extra hosts without
any change, and also extra storage just by creating extra link
aggregation groups.

hope it helps, at least in clarifying the general concepts. for
details you'll have to consult the docs of both your storage box and
switches, and experiment a lot!

--
Javier
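As an illustration of the switch side of this, a sketch of what an
LACP bundle to the storage box might look like on a 3750-class switch;
the port, VLAN and channel-group numbers are made up:

! bundle four ports going to the storage box into one LACP port-channel
interface range GigabitEthernet1/0/21 - 24
 description storage-box uplinks
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode active
!
! "mode active" negotiates LACP; the bundle appears as Port-channel1
interface Port-channel1
 switchport mode access
 switchport access vlan 100
!
! hash on src+dst IP so different dom0s land on different physical links
port-channel load-balance src-dst-ip

Note that the load-balancing hash pins each flow to one physical link,
so a single dom0-to-storage stream still tops out at about 1Gb; the
aggregate only helps when several hosts (or several flows) are active
at once.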
On 11/2/09, Javier Guerra <javier@guerrag.com> wrote:
> On Mon, Nov 2, 2009 at 11:39 AM, Israel Garcia <igalvarez@gmail.com> wrote:
>> Can you help me?
>
> not much, unfortunately. even if there are some standards, compliance
> is spotty at best, so you'll have to test whether your devices
> cooperate.
>

Hi Javier,

Your comment about link aggregation is very interesting, thanks :-) I
think this setup (using link aggregation on both sides) is the best
way to get more bandwidth out of a 1GbE network serving SAN boxes.
I've searched the web a lot and haven't found a better setup. I'm
going to test the LACP/bonding configuration and, if possible, I'll
send the list some results.

thanks again.

regards,
Israel.

> in any case, this was my reasoning for mentioning port aggregation (or
> more precisely, link aggregation):
>
> - the 'usual' topology for all things Ethernet (including iSCSI) is
> to simply put the switches in the middle and pull one cable to each
> host.
>
> - in a SAN, this creates a bottleneck, since it's common to have just
> one or two storage boxes for several hosts (especially when just
> starting!). The single Ethernet port going to the storage box limits
> the total access bandwidth to just 1Gb for all hosts.
>
> - most iSCSI devices currently include several (4-6) GbE ports.
>
> - the naïve way to use all these ports would be to ditch the Ethernet
> switch and just connect one host to each port. This gives you 1Gb
> dedicated to each host, and the total data bandwidth is limited only
> by the platter and internal backbone speeds.
>
> - unfortunately, this strategy is too limiting for later growth. Not
> only do you have a limited number of ports, it also makes it nearly
> impossible to add a second storage box.
>
> - so, what you can do is keep the central switch and plug each host
> into a single port of the switch; but for the storage box, use several
> ports connected to several ports on the switch. If the link
> aggregation features of the storage box and the switch match, you now
> have a single very fat link between the box and the switch. From the
> point of view of the hosts, it's exactly the same as the 'usual'
> topology (one device on each switch port), but a single host won't be
> able to saturate the storage bandwidth.
>
> - expandability also isn't impaired: you can add extra hosts without
> any change, and also extra storage just by creating extra link
> aggregation groups.
>
> hope it helps, at least in clarifying the general concepts. for
> details you'll have to consult the docs of both your storage box and
> switches, and experiment a lot!
>
> --
> Javier
>

--
Regards;
Israel Garcia