Hello,

First off, I am using ZFS under FreeBSD 7.0. Forgive me if this is the
wrong place (I do plan to post to the FreeBSD lists as well, but this
seems to be more of a ZFS-related question).

I am using a 3ware 9690SA-8E and two IBM EXP3000 chassis. One issue is
that the 3ware card isn't hard-setting the enclosure IDs on the chassis
(I have a ticket open with them on that), but that isn't why I am
posting this; it is just how I discovered the behavior below.
I have a RAID 10 configuration as follows (da0-da11 is one enclosure
and da12-da23 is the other):
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da12    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da13    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da14    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da15    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da16    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da5     ONLINE       0     0     0
            da17    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da18    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da7     ONLINE       0     0     0
            da19    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da8     ONLINE       0     0     0
            da20    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da9     ONLINE       0     0     0
            da21    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            da10    ONLINE       0     0     0
            da22    ONLINE       0     0     0
        spares
          da11      AVAIL
          da23      AVAIL
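For reference, a pool with this layout corresponds to a zpool create
along these lines. This is only a sketch reconstructed from the status
output above; the original command does not appear in the thread:

    zpool create tank \
        mirror da0 da12   mirror da1 da13   mirror da2  da14 \
        mirror da3 da15   mirror da4 da16   mirror da5  da17 \
        mirror da6 da18   mirror da7 da19   mirror da8  da20 \
        mirror da9 da21   mirror da10 da22 \
        spare da11 da23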
I can cause each enclosure to assume either enclosure ID 0 or ID 1
(thus swapping da0-da11 with da12-da23 and vice versa). The pool will
come up fine either way (even though the devices move around), which is
great. If I bring down one of the enclosures, the surviving drives come
up on da0-da11 and the pool is fine. If I bring that enclosure down and
bring up the opposite one instead, da0-da11 come up failed and the rest
show unavailable (since the other enclosure is down).
My question is (finally, sorry): it appears ZFS doesn't care if I flip
the sides of the mirror around, but the enclosure that was originally
enclosure 1 (originally set up as da12-da23) will never come up by
itself. Why is that, if this enclosure can assume ID 0 and the other
assume ID 1 and the ZFS pool will come up that way?
Thanks!
Weldon
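Some background that may help frame the question above: ZFS identifies
each member disk by the GUID stored in its on-disk vdev label, not by
the daN device path, which is why the pool can tolerate the enclosures
swapping IDs. One way to inspect those labels, as a sketch assuming a
member disk at /dev/da0:

    # Print the copies of the vdev label on the disk, including the
    # pool_guid and the per-vdev guid that ZFS matches on at import.
    zdb -l /dev/da0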
On Tue, Jun 24, 2008 at 1:42 PM, Weldon S Godfrey 3 <weldon at excelsus.com> wrote:
> My question is (finally, sorry): it appears ZFS doesn't care if I flip
> the sides of the mirror around, but the enclosure that was originally
> enclosure 1 (originally set up as da12-da23) will never come up by
> itself. Why is that, if this enclosure can assume ID 0 and the other
> assume ID 1 and the ZFS pool will come up that way?

Are you doing a zpool export / zpool import between taking the
enclosures down and bringing them back up?

-B

--
Brandon High  bhigh at freaks.com
"The good is the enemy of the best." - Nietzsche
No, I was trying to simulate cabinet failure to see if it can recover.

Thanks!

If memory serves me right, sometime around 12:13am, Brandon High told me:
> Are you doing a zpool export / zpool import between taking the
> enclosures down and bringing them back up?
I meant to add that the devices come up "Failure (corrupted data)" when
the enclosure that was enclosure 1 when the RAID 10 was created comes
up by itself. Again, if enclosure 1 is up by itself, it comes up okay.
When both enclosures come up, no matter which is 0 and which is 1, the
pool comes up with no failed/corrupted drives.

Thanks!

If memory serves me right, sometime around 12:13am, Brandon High told me:
> Are you doing a zpool export / zpool import between taking the
> enclosures down and bringing them back up?
Thanks to Miles Nordin <carton at Ivy.NET> for pointing out the
confusion in my previous post. I'm sorry; the fourth line was meant to
read: if enclosure 0 comes up by itself (the one that was originally
da0-da11), it is okay (not enclosure 1).

Trying to be clearer: the confusing point for me is that I can cause
the ID numbers of the enclosures to swap (1 for 0, 0 for 1) and it
works okay. If that is true, then it seems ZFS should be able to handle
the drive IDs changing when one enclosure is down. I understand that
after an export and then an import this is okay.

If memory serves me right, sometime around 12:22pm, Weldon S Godfrey 3 told me:
> I meant to add that the devices come up "Failure (corrupted data)"
> when the enclosure that was enclosure 1 when the RAID 10 was created
> comes up by itself.
Thanks to Brandon High <bhigh at freaks.com>, and I am sorry for my
thickness in assuming I could not export a faulted pool. After the
failure, I exported and then imported with the zpool command and the
pool came back.

Thanks!

Weldon

If memory serves me right, sometime around Wednesday, Weldon S Godfrey 3 told me:
> Trying to be clearer: the confusing point for me is that I can cause
> the ID numbers of the enclosures to swap (1 for 0, 0 for 1) and it
> works okay.
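For reference, the recovery sequence described above amounts to
something like the following. This is a sketch only: the pool name
"tank" comes from the status output earlier in the thread, and whether
-f is needed depends on the state the pool is in:

    # Export the pool so ZFS forgets the stale device paths, then
    # import it; the import rescans all disks and matches them by the
    # GUIDs in their on-disk labels rather than by daN path.
    zpool export -f tank
    zpool import tank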