We're currently designing a ZFS fileserver environment with iSCSI-based storage (for failover, cost, ease of expansion, and so on). As part of this we would like to use multipathing for extra reliability, and I am not sure how we want to configure it.

Our iSCSI backend only supports multiple sessions per target, not multiple connections per session (and my understanding is that the Solaris initiator doesn't currently support multiple connections anyway). However, we have been cautioned that there is nothing in the backend that imposes a global ordering for commands between the sessions, and so disk IO might get reordered if Solaris's multipath load balancing submits part of it to one session and part to another.

So: does anyone know if Solaris's multipath and iSCSI systems already take care of this, or if ZFS is already paranoid enough to deal with this, or if we should configure Solaris multipathing to not load-balance?

(A load-balanced multipath configuration is simpler for us to administer, at least until I figure out how to tell Solaris multipathing which is the preferred network for any given iSCSI target, so we can balance the overall network load by hand.)

Thanks in advance.

- cks
In /kernel/drv/scsi_vhci.conf you could do this:

    load-balance="none";

That way MPxIO would use only one path at a time. I imagine you also need a vid/pid entry in scsi_vhci.conf for your target.

Regards,
Vic

On Fri, Apr 4, 2008 at 3:36 PM, Chris Siebenmann <cks at cs.toronto.edu> wrote:
> We're currently designing a ZFS fileserver environment with iSCSI-based
> storage (for failover, cost, ease of expansion, and so on). [...]
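[For concreteness, a minimal sketch of the vid/pid entry Vic mentions, in the Solaris 10 scsi_vhci.conf style. "VENDOR  PRODUCT" is a placeholder; it must match the inquiry strings your target actually reports, with the vendor field padded to eight characters.]

    # Sketch only: claim the target for MPxIO as a symmetric device.
    # Replace "VENDOR  PRODUCT" with your target's real vid/pid strings.
    device-type-scsi-options-list =
        "VENDOR  PRODUCT", "symmetric-option";
    symmetric-option = 0x1000000;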
I assume you mean IPMP here, which refers to ethernet multipath. There is also the other meaning of multipath, referring to multiple paths to the storage array, typically enabled by the stmsboot command.

We run active-passive (failover) IPMP, as it keeps things simple for us, and I have run into some weird bugs with active-active IPMP configurations. If you run the IPF software firewall, all rules must be stateless, since state-table tracking across interfaces doesn't work.
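[For reference, a minimal sketch of a link-based failover IPMP setup on Solaris 10. The interface names (e1000g0/e1000g1), group name, and hostname are placeholders.]

    # /etc/hostname.e1000g0 -- active interface carrying the data address
    myhost group ipmp0 up

    # /etc/hostname.e1000g1 -- standby interface, no data address
    group ipmp0 standby up

[With link-based failure detection no test addresses are needed; if e1000g0's link drops, the address fails over to e1000g1.]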
ZFS will handle out-of-order writes due to its transactional nature. Individual writes can be re-ordered safely. When the transaction commits, it will wait for all writes and flush them; then write a new uberblock with the new transaction group number and flush that.

Chris Siebenmann wrote:
> We're currently designing a ZFS fileserver environment with iSCSI-based
> storage (for failover, cost, ease of expansion, and so on). [...]
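[If you want to watch this from the outside, the active uberblock's transaction group number is visible with zdb; a minimal sketch, assuming a pool named "tank":]

    # Print the active uberblock; the txg field advances each time a
    # transaction group commits, so running this twice shows progress.
    zdb -u tank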
| I assume you mean IPMP here, which refers to ethernet multipath.
|
| There is also the other meaning of multipath, referring to multiple
| paths to the storage array, typically enabled by the stmsboot command.

We are currently looking at (and testing) the non-ethernet sort of multipathing, partly as being the simplest way to have those multiple paths to the iSCSI backend storage by using two completely independent networks. This should also give us greater aggregate bandwidth through the entire fabric.

(Each iSCSI backend unit will have two network interfaces with separate IP addresses and so on.)

- cks
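[For illustration, a made-up example of this sort of layout; all names and addresses here are invented.]

    # Two independent iSCSI fabrics, each with its own switch and subnet:
    #   fabric A: 10.1.1.0/24       fabric B: 10.1.2.0/24
    #   backend1:   10.1.1.10 (A)   and  10.1.2.10 (B)
    #   fileserver: 10.1.1.1  (A)   and  10.1.2.1  (B)
    # No cross-links between fabrics, so a failure in one cannot
    # take out the other.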
Vincent Fox wrote:
> I assume you mean IPMP here, which refers to ethernet multipath.

No. IPMP is IP multipathing. You can run IP over almost anything, even cups-n-string :-)

-- richard
You DO mean IPMP then. That's what I was trying to sort out: to make sure that you were talking about the IP part of things, the iSCSI layer, and not the paths from the "target" system to its local storage.

You say "non-ethernet" for your network transport; what ARE you using?
Oh sure, pick nits. Yeah, I should have said "network multipath" instead of "ethernet multipath", but really, how often do I encounter non-ethernet networks? I can't recall the last time I saw a token ring or anything else.
Vincent Fox wrote:
> You DO mean IPMP then. That's what I was trying to sort out: to make
> sure that you were talking about the IP part of things, the iSCSI
> layer, and not the paths from the "target" system to its local storage.

There is more than one way to skin this cat. Fortunately there is already a Sun BluePrint on it, "Using iSCSI Multipathing in the Solaris 10 Operating System":

http://www.sun.com/blueprints/1205/819-3730.pdf

> You say "non-ethernet" for your network transport; what ARE you using?

WiFi mostly. DSL for some stuff. When you run IP, do you really care? Do you really know? :-)

-- richard
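[For reference, the mechanics in that BluePrint boil down to roughly the following sketch; the IQN is a placeholder, and the exact flags are worth checking against the document itself.]

    # MPxIO for iSCSI devices is controlled by the mpxio-disable
    # property in /kernel/drv/iscsi.conf; it should read:
    #   mpxio-disable="no";
    #
    # Ask the initiator to open two sessions to the target (MS/T);
    # the IQN below is a placeholder.
    iscsiadm modify target-param -c 2 iqn.1986-03.com.example:target0
    # Verify that both sessions collapse into one multipathed LU.
    mpathadm list lu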
| You DO mean IPMP then. That's what I was trying to sort out: to make
| sure that you were talking about the IP part of things, the iSCSI
| layer.

My apologies for my lack of clarity. We are not looking at IPMP multipathing; we are using MPxIO multipathing (mpathadm et al), which operates at what one can think of as a higher level.

(IPMP gives you a single session to iSCSI storage over multiple network devices. MPxIO and appropriate lower-level iSCSI settings give you multiple sessions to iSCSI storage over multiple networks and multiple network devices.)

- cks
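[To make the distinction concrete, here is roughly how the two layers look from the command line; the device name is a placeholder.]

    # iSCSI layer: with multiple configured sessions, each session to
    # the target is listed separately.
    iscsiadm list target -v
    # MPxIO layer: those sessions appear as multiple paths to a single
    # logical unit (substitute your LU's actual device name).
    mpathadm show lu /dev/rdsk/cXtYdZs2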
> ZFS will handle out-of-order writes due to its transactional nature.
> Individual writes can be re-ordered safely. When the transaction
> commits, it will wait for all writes and flush them; then write a new
> uberblock with the new transaction group number and flush that.

Sure -- the question at hand is whether MPxIO, when doing load balancing, is smart enough to recognize that the "flush" depends on the completion of writes on *both* paths. I'd think it would have to send the flush down both channels, and I doubt it does this special-casing right now.

Anton