I had always thought that with mpxio, it load-balances IO requests across your storage ports, but this article http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has got me thinking it's not true.

"The available bandwidth is 2 or 4Gb/s (200 or 400MB/s - FC frames are 10 bytes long -) per port. As load balancing software (Powerpath, MPXIO, DMP, etc.) are most of the times used both for redundancy and load balancing, I/Os coming from a host can take advantage of an aggregated bandwidth of two ports. However, reads can use only one path, but writes are duplicated, i.e. a host write ends up as one write on each host port."

Is this true?
-- 
This message posted from opensolaris.org
On Sun, Apr 4, 2010 at 8:55 PM, Brad <beneri3 at yahoo.com> wrote:
> I had always thought that with mpxio, it load-balances IO requests across
> your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s - FC frames are 10
> bytes long -) per port. As load balancing software (Powerpath, MPXIO, DMP,
> etc.) are most of the times used both for redundancy and load balancing,
> I/Os coming from a host can take advantage of an aggregated bandwidth of two
> ports. However, reads can use only one path, but writes are duplicated, i.e.
> a host write ends up as one write on each host port."
>
> Is this true?

I have no idea what MPIO stack he's talking about, but I've never heard of anything that operates the way he describes. Writes aren't "duplicated on each port". The path a read OR write goes down depends on the host-side mpio stack and how you have it configured to load-balance. It could be simple round-robin, it could be based on queue depth, it could be most recently used, etc.

--Tim
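(A rough sketch of the policies mentioned above, not MPxIO's actual implementation: the Path class and port names below are hypothetical, purely to contrast round-robin with queue-depth path selection. Note that either policy sends each I/O, read or write, down exactly one path; nothing is duplicated.)

    import itertools

    class Path:
        """Hypothetical stand-in for one host-to-array path (illustration only)."""
        def __init__(self, name):
            self.name = name
            self.outstanding = 0   # in-flight I/Os currently queued on this path

    def round_robin(paths):
        """Hand out healthy paths in a fixed rotation, one per I/O."""
        return itertools.cycle(paths)

    def least_queue_depth(paths):
        """Pick whichever path has the fewest outstanding I/Os right now."""
        return min(paths, key=lambda p: p.outstanding)

    # Every I/O (read or write) is dispatched on exactly one path.
    paths = [Path("host port A"), Path("host port B")]
    selector = round_robin(paths)
    for i in range(4):
        chosen = next(selector)
        chosen.outstanding += 1
        print("I/O", i, "dispatched via", chosen.name)

    # A queue-depth policy would instead pick the least-loaded path per I/O:
    print("next I/O by queue depth would go via", least_queue_depth(paths).name)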
Torrey McMahon
2010-Apr-05 15:10 UTC
[zfs-discuss] mpxio load-balancing...it doesn't work??
Not true. There are different ways that a storage array, and its controllers, connect to the host-visible front end ports, which might be confusing the author, but I/O isn't duplicated as he suggests.

On 4/4/2010 9:55 PM, Brad wrote:
> I had always thought that with mpxio, it load-balances IO requests across your storage ports, but this article http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/ has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s - FC frames are 10 bytes long -) per port. As load balancing software (Powerpath, MPXIO, DMP, etc.) are most of the times used both for redundancy and load balancing, I/Os coming from a host can take advantage of an aggregated bandwidth of two ports. However, reads can use only one path, but writes are duplicated, i.e. a host write ends up as one write on each host port."
>
> Is this true?
Bob Friesenhahn
2010-Apr-05 17:36 UTC
[zfs-discuss] mpxio load-balancing...it doesn't work??
On Sun, 4 Apr 2010, Brad wrote:
> I had always thought that with mpxio, it load-balances IO requests
> across your storage ports, but this article
> http://christianbilien.wordpress.com/2007/03/23/storage-array-bottlenecks/
> has got me thinking it's not true.
>
> "The available bandwidth is 2 or 4Gb/s (200 or 400MB/s - FC frames
> are 10 bytes long -) per port. As load balancing software
> (Powerpath, MPXIO, DMP, etc.) are most of the times used both for
> redundancy and load balancing, I/Os coming from a host can take
> advantage of an aggregated bandwidth of two ports. However, reads
> can use only one path, but writes are duplicated, i.e. a host write
> ends up as one write on each host port."
>
> Is this true?

This text seems strange and wrong, since duplicating writes would result in duplicate writes to disks, which could cause corruption if the ordering was not perfectly preserved.

Depending on the storage array capabilities, MPXIO could use different strategies. A common strategy is active/standby on a per-LUN level. Even with active/standby, effective load sharing is possible if the storage array can be told to assign a preference between a LUN and a port. That is what I have done with my own setup: half of the LUNs have a preference for each port, so that with all paths functional the FC traffic is similar on each FC link.

-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
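(A minimal sketch of the even split Bob describes, with made-up LUN and port names; in practice the preference is set through the array's management tools rather than host code, so this is illustration only.)

    def assign_preferred_ports(luns, ports):
        """Spread LUN-to-port preference evenly, as in the half-and-half split above.

        With all paths healthy, each LUN's traffic favors its preferred port, so the
        load is roughly balanced across the FC links; if a link fails, its LUNs
        simply fail over to the surviving port.
        """
        return {lun: ports[i % len(ports)] for i, lun in enumerate(luns)}

    # Hypothetical LUN and port names, for illustration only.
    luns = ["LUN0", "LUN1", "LUN2", "LUN3"]
    ports = ["array port A", "array port B"]
    for lun, port in assign_preferred_ports(luns, ports).items():
        print(lun, "prefers", port)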
I'm wondering if the author is talking about "cache mirroring", where the cache is mirrored between both controllers. If that is the case, is he saying that for every write to the active controller, a second write is issued on the passive controller to keep the cache mirrored?
-- 
This message posted from opensolaris.org
Torrey McMahon
2010-Apr-06 02:04 UTC
[zfs-discuss] mpxio load-balancing...it doesn't work??
The author mentions multipathing software in the blog entry. Kind of hard to mix that up with cache mirroring, if you ask me.

On 4/5/2010 9:16 PM, Brad wrote:
> I'm wondering if the author is talking about "cache mirroring", where the cache is mirrored between both controllers. If that is the case, is he saying that for every write to the active controller, a second write is issued on the passive controller to keep the cache mirrored?
On Mon, Apr 5, 2010 at 8:16 PM, Brad <beneri3 at yahoo.com> wrote:
> I'm wondering if the author is talking about "cache mirroring", where the
> cache is mirrored between both controllers. If that is the case, is he
> saying that for every write to the active controller, a second write is
> issued on the passive controller to keep the cache mirrored?

He's talking about multipathing; he just has no clue what he's talking about. He specifically calls out software packages that are used for multipathing.

--Tim