Longina Przybyszewska
2009-Apr-06 10:26 UTC
[Xen-users] multiple iscsi targets on bonding interface
Hi,

I have a bonding interface to carry traffic between Dom0 and the SAN - it
works great with one iSCSI target. Is it possible to manage more iSCSI
targets on the same bond? Would it work using, say, an alias interface
such as bond0.222:1? Would it be more flexible/stable to have a bridge on
top of the bond - does someone have experience with such a configuration?

regards
Longina

--
Longina Przybyszewska, system programmer
IT@Naturvidenskab
IMADA, Department of Mathematics and Computer Science
University of Southern Denmark, Odense
Campusvej 55, DK-5230 Odense M, Denmark
tel: +45 6550 2359 - http://www.imada.sdu.dk
email: longina@imada.sdu.dk
Ferenc Wagner
2009-Apr-06 23:03 UTC
[Xen-users] Re: multiple iscsi targets on bonding interface
Longina Przybyszewska <longina@imada.sdu.dk> writes:

> I have a bonding interface to carry traffic between Dom0 and the SAN -
> it works great with one iSCSI target.
> Is it possible to manage more iSCSI targets on the same bond?

We maintain iSCSI-backed virtual machines: each machine uses a separate
iSCSI target as its disk. So it's possible. Or maybe I don't understand
the question.

> Would it work using, say, an alias interface such as bond0.222:1?

Why do you want an alias interface?

> Would it be more flexible/stable to have a bridge on top of the bond -
> does someone have experience with such a configuration?

We have 802.1q VLAN interfaces on the bond, which are bridged with the
virtual machine interfaces. Thus each virtual machine has access to the
necessary VLANs via separate virtual interfaces. But this hasn't got
anything to do with iSCSI, which is mostly managed by the dom0. We also
have a couple of iSCSI-rooted domUs, but that doesn't make a difference.
--
Feri.
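For illustration, a rough sketch of that kind of setup done by hand with
iproute2 and bridge-utils; the interface, VLAN and bridge names below are
made-up examples, not our actual configuration (normally this lives in the
distribution's network config rather than in a script):

#!/bin/sh
# Sketch only: one 802.1q VLAN on top of the bond, bridged for the guests.
# bond0, VLAN 222 and xenbr222 are example names.

ip link add link bond0 name bond0.222 type vlan id 222   # VLAN interface on the bond
brctl addbr xenbr222                                     # bridge for that VLAN
brctl addif xenbr222 bond0.222                           # uplink = the VLAN interface
ip link set bond0.222 up
ip link set xenbr222 up
# each domU then gets a vif on this bridge, e.g. vif = [ 'bridge=xenbr222' ]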
Ferenc Wagner
2009-Apr-10 15:16 UTC
[Xen-users] Re: multiple iscsi targets on bonding interface
(Please keep the thread on the mailing list.)

Longina Przybyszewska <longina@imada.sdu.dk> writes:

> On Tue, 7 Apr 2009, Ferenc Wagner wrote:
>
>> We maintain iSCSI-backed virtual machines: each machine uses a
>> separate iSCSI target as its disk. So it's possible. Or maybe
>> I don't understand the question.
>
> I understand that if I had multiple iSCSI LUNs managed by Dom0, I would
> need the possibility of multiple TCP/IP connections from Dom0, which
> means multiple network interfaces, iface0, iface1... in open-iscsi
> terminology. This is why I am thinking about alias interfaces, in our
> case bond1.xxx:{0,1,2,3}.

I'm not sure what "iSCSI LUNs managed by dom0" means here, but you don't
need multiple network interfaces (or IP addresses or ports), either for
exporting multiple iSCSI targets from your iSCSI server or for logging
into multiple iSCSI targets from a client node.

> Actually we have a Xen server with two pairs of NICs set up as bonding
> interfaces:
>
> - BOND0 is like yours - 802.1q VLAN interfaces on top of the bond,
>   plus bridges on top of each VLAN. Virtual machines have access to
>   different VLANs via interfaces bridged to the specific VLAN bridge.
>
> - BOND1 is another bonding interface, configured on an ordinary access
>   port, for accessing the SAN storage VLAN. The point of bond1 is to
>   have a separate interface for storage traffic, i.e. for accessing
>   iSCSI targets.

OK. This is purely a performance tweak (which can be important).

> My "missing link" is how to access multiple iSCSI LUNs on Dom0, or how
> to make DomUs access separate iSCSI LUNs if binding should happen via
> bond1.

Now you have to be clearer. Do you want to access the iSCSI targets from
the dom0 (to provide boot disks for your domUs) or from the domUs (to
gain extra storage after boot)? In the first case the (dom0) kernel IP
routing should take care of everything. In the second case you should
share bond1 with the respective clients via a common bridge on top of
the bond.

> I was thinking about a bridge on top of bond1 - each iSCSI client
> machine (Dom0, DomUs) could bridge its "storage iface" to it.
> But I had some routing problems in Dom0 and gave up.

Ah, so you want to make both dom0 and the domUs iSCSI clients! In this
case you have to assign an IP address from your storage network to bond1
as well. And to each domU, of course. And you shouldn't have routing
problems. :)

>> But this hasn't got anything to do with iSCSI, which is mostly managed
>> by the dom0. We also have a couple of iSCSI-rooted domUs, but that
>> doesn't make a difference.
>
> Are we talking about iSCSI LUNs (/dev/sd{a,b,c,d}) or one huge iSCSI
> LUN, LVM-partitioned into smaller pieces for the DomUs' root/swap/data?

Don't confuse the different layers. Each of our PV domUs is backed by an
independent iSCSI target, to make independent live migration of domUs
possible. Each domU sees its assigned target as virtual disk /dev/xvda,
and has no idea whatsoever that it is an iSCSI target in reality. Then
each domU uses this virtual disk as it wants: some partition it, some use
it as an LVM physical volume, some put a filesystem straight on it. iSCSI
is absolutely out of the picture here, with the exception of the
iSCSI-rooted domUs, which, on the other hand, have no disk devices
assigned to them by Xen: they are (virtually) "diskless"; the Xen host
doesn't know they mount iSCSI devices as their roots from initramfs.
--
Cheers,
Feri.
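To make that point concrete: logging one node into several targets over a
single interface needs nothing more than ordinary open-iscsi commands. A
small sketch; the portal address and target names below are invented for
the example:

#!/bin/sh
# Sketch only: multiple iSCSI sessions over one storage interface (bond1).
# The portal IP and the IQNs are made-up example values.
PORTAL=192.168.100.10

# one discovery lists every target the portal exports
iscsiadm --mode discovery --type sendtargets --portal $PORTAL

# log into each target; all sessions share the same local IP on bond1
for target in iqn.2009-04.dk.example:guest1-disk iqn.2009-04.dk.example:guest2-disk; do
    iscsiadm --mode node --targetname "$target" --portal $PORTAL --login
done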
Jeff Williams
2009-May-06 02:35 UTC
OT: Re: [Xen-users] Re: multiple iscsi targets on bonding interface
On 10/04/09 23:16, Ferenc Wagner wrote:
> Don't confuse the different layers. Each of our PV domUs is backed by
> an independent iSCSI target, to make independent live migration of
> domUs possible. Each domU sees its assigned target as virtual disk
> /dev/xvda, and has no idea whatsoever that it is an iSCSI target in
> reality. Then each domU uses this virtual disk as it wants: some
> partition it, some use it as an LVM physical volume, some put a
> filesystem straight on it. iSCSI is absolutely out of the picture
> here, with the exception of the iSCSI-rooted domUs, which, on the
> other hand, have no disk devices assigned to them by Xen: they are
> (virtually) "diskless"; the Xen host doesn't know they mount iSCSI
> devices as their roots from initramfs.

Ferenc,

Just out of interest, do you have any problems managing all of those
iSCSI targets across all of your Xen dom0s? I would imagine you'd end up
with a very large number of /dev/sd* devices on all of the dom0s.

Regards,
Jeff
Christopher Chen
2009-May-06 05:22 UTC
Re: OT: Re: [Xen-users] Re: multiple iscsi targets on bonding interface
My first stab at this involved setting up cLVM across a CentOS/RH Cluster
with rgmanager handling the DomU provisioning. It all works, but it means
LVM gets put in the sandwich at least three times - a bit much.

My idea for next time is to use dm-multipath on the Dom0 with friendly
names tied to the ScsiSN set on the IET target: I provision a LUN on the
target and give it a unique ScsiSN, then with dm-multipath friendly-name
mapping I associate that serial with, for instance,
/dev/mpath/target.guest.root or whatever you want. That way I can refer
to the friendly names in my DomU configs, and all I have to do is manage
a single list that gets propagated via cfengine or whatever.

My 2c.

cc

On Tue, May 5, 2009 at 7:35 PM, Jeff Williams <jeffw@globaldial.com> wrote:
> On 10/04/09 23:16, Ferenc Wagner wrote:
>> Don't confuse the different layers. Each of our PV domUs is backed by
>> an independent iSCSI target, to make independent live migration of
>> domUs possible. Each domU sees its assigned target as virtual disk
>> /dev/xvda, and has no idea whatsoever that it is an iSCSI target in
>> reality. Then each domU uses this virtual disk as it wants: some
>> partition it, some use it as an LVM physical volume, some put a
>> filesystem straight on it. iSCSI is absolutely out of the picture
>> here, with the exception of the iSCSI-rooted domUs, which, on the
>> other hand, have no disk devices assigned to them by Xen: they are
>> (virtually) "diskless"; the Xen host doesn't know they mount iSCSI
>> devices as their roots from initramfs.
>
> Ferenc,
>
> Just out of interest, do you have any problems managing all of those
> iSCSI targets across all of your Xen dom0s? I would imagine you'd end
> up with a very large number of /dev/sd* devices on all of the dom0s.
>
> Regards,
> Jeff

--
Chris Chen <muffaleta@gmail.com>
"I want the kind of six pack you can't drink." -- Micah
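To sketch what I mean (this is only an illustration under my own
assumptions, not a real config; the target name, serial, WWID and alias
are placeholders in the same <...> spirit):

# On the IET target, give the LUN a unique serial in ietd.conf:
#
#   Target iqn.2009-05.example:guest1
#       Lun 0 Path=/dev/vg0/guest1-root,Type=blockio,ScsiSN=<UNIQUE SERIAL>
#
# On the Dom0, map the WWID that this LUN ends up with (as shown by
# "multipath -ll") to a friendly name in /etc/multipath.conf:

multipaths {
    multipath {
        wwid   <WWID OF THE LUN>
        alias  target.guest1.root    # appears as /dev/mpath/target.guest1.root
    }
}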
Ferenc Wagner
2009-May-06 09:16 UTC
[Xen-users] Re: OT: Re: multiple iscsi targets on bonding interface
Jeff Williams <jeffw@globaldial.com> writes:

> On 10/04/09 23:16, Ferenc Wagner wrote:
>
>> Don't confuse the different layers. Each of our PV domUs is backed by
>> an independent iSCSI target, to make independent live migration of
>> domUs possible. Each domU sees its assigned target as virtual disk
>> /dev/xvda, and has no idea whatsoever that it is an iSCSI target in
>> reality. Then each domU uses this virtual disk as it wants: some
>> partition it, some use it as an LVM physical volume, some put a
>> filesystem straight on it. iSCSI is absolutely out of the picture
>> here, with the exception of the iSCSI-rooted domUs, which, on the
>> other hand, have no disk devices assigned to them by Xen: they are
>> (virtually) "diskless"; the Xen host doesn't know they mount iSCSI
>> devices as their roots from initramfs.
>
> Just out of interest, do you have any problems managing all of those
> iSCSI targets across all of your Xen dom0s? I would imagine you'd end
> up with a very large number of /dev/sd* devices on all of the dom0s.

Sure, there are quite a few of them. But computers are supposed to be
good at exactly this kind of thing, aren't they? Anyway, I don't use the
/dev/sd* nodes in the domU configs directly, but the /dev/disk/by-path
symlinks. Those are persistent and descriptive at the same time (if
configured correctly on the target device). For the initiator side
configuration I created a simple script, which makes the necessary
changes to the default configuration of the open-iscsi Debian package.
That's about it.

Cheers,
Feri.

#!/bin/bash -e

# Log-in helper: update the open-iscsi node records for our targets and
# optionally discover and log into them.
iqn=<COMMON TARGET NAME PREFIX>
portal=<SAN PORTAL>

while [ -n "$1" ]; do
    case "$1" in
        -n) debug=echo;;    # dry run: only print the iscsiadm commands
        -l) login=yes;;     # log into the targets after configuring them
        -d) $debug iscsiadm --mode discovery --type sendtargets -p $portal;;
        *)  echo "targetconf: unknown switch: $1" >&2; exit 1;;
    esac
    shift
done

# For each target and its CHAP password, set the node parameters and
# optionally log in.
while read lun pass; do
    while read name value; do
        $debug iscsiadm -m node -T $iqn:$lun -p $portal -o update -n $name -v $value
    done <<EOF
node.startup automatic
node.session.auth.authmethod CHAP
node.session.auth.username $lun
node.session.auth.password $pass
node.session.auth.username_in <SAN USER>
node.session.auth.password_in <SAN PASSWORD>
node.session.timeo.replacement_timeout 2400
EOF
    [ -n "$login" ] && $debug iscsiadm -m node -T $iqn:$lun -p $portal -l
done <<EOF
<TARGET_NAME1> <PASSWORD1>
<TARGET_NAME2> <PASSWORD2>
[...]
EOF
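As an aside, a by-path based disk line in a domU config then looks roughly
like this (the portal IP, target IQN and LUN number are invented for the
example; the exact symlink name depends on how udev composes it on your
system):

# example only -- use the symlink that actually appears under /dev/disk/by-path/
disk = [ 'phy:/dev/disk/by-path/ip-192.168.100.10:3260-iscsi-iqn.2009-04.dk.example:guest1-disk-lun-0,xvda,w' ]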
Ferenc Wagner
2009-May-06 09:24 UTC
[Xen-users] Re: OT: Re: multiple iscsi targets on bonding interface
Christopher Chen <muffaleta@gmail.com> writes:

> My idea for next time is to use dm-multipath on the Dom0 with friendly
> names tied to the ScsiSN set on the IET target: I provision a LUN on
> the target and give it a unique ScsiSN, then with dm-multipath
> friendly-name mapping I associate that serial with, for instance,
> /dev/mpath/target.guest.root or whatever you want. That way I can
> refer to the friendly names in my DomU configs, and all I have to do
> is manage a single list that gets propagated via cfengine or whatever.

We do something very similar with our FC-based domUs: the multipath
config chooses aliases based on the WWID of the LUN; the corresponding
aliases are extracted from the SAN by an EMC-specific utility and
massaged into a multipath config by a home-made one-liner. For our iSCSI
setup the redundancy is provided at the network level, so multipath isn't
necessary, and udev provides convenient persistent naming by default.
--
Cheers,
Feri.
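Purely to illustrate that massaging step (this is only a sketch assuming a
plain "WWID alias" list as input, not our actual one-liner or input
format):

# Sketch only: turn a "WWID alias" list into multipath { wwid ...; alias ...; }
# stanzas, to be pasted into the multipaths {} section of /etc/multipath.conf.
awk '{ printf "multipath {\n\twwid %s\n\talias %s\n}\n", $1, $2 }' wwid-aliases.txt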