Nenad Cimerman
2007-Apr-26  15:36 UTC
[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?
You can - easily:

# zpool export mypool

Then you take out one of the disks and put it into another system or a safe place. Afterwards you simply import the pool again:

# zpool import mypool

Note - you can NOT import both disks separately, as they are both tagged as belonging to the same zpool.

I just tried this using files as pool devices. I didn't test it with real disks/slices, but it shouldn't make any difference.

HTH,
Nenad.

PS: I know, the reply is pretty late... I just read this thread.
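(For reference, a minimal file-backed reproduction of the steps above might look like the following; the file names and sizes are only illustrative, and a file-based pool needs "zpool import -d <dir>" so that import can find its devices:)

# mkfile 128m /var/tmp/d1 /var/tmp/d2
# zpool create mypool mirror /var/tmp/d1 /var/tmp/d2
# zpool export mypool
# mv /var/tmp/d2 /var/tmp/d2.pulled
# zpool import -d /var/tmp mypool
# zpool status mypool

After the import, the pool comes up DEGRADED, with the moved file reported as unavailable.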
Cindy.Swearingen at Sun.COM
2007-Apr-26  18:57 UTC
[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?
Nenad,
I've seen this solution offered before, but I would not recommend it
except as a last resort, unless you don't care about the health of
the original pool.
Removing a device from an exported pool could be very bad, depending
on the pool's redundancy. You might not get all your data back unless
you put the disk back.
See the output below.
Definitely not for a pool and data on a production system.
Cindy
# zpool status epool
   pool: epool
  state: ONLINE
  scrub: none requested
config:
         NAME        STATE     READ WRITE CKSUM
         epool       ONLINE       0     0     0
           mirror    ONLINE       0     0     0
             c7t6d0  ONLINE       0     0     0
             c7t5d0  ONLINE       0     0     0
             c5t5d0  ONLINE       0     0     0
             c6t6d0  ONLINE       0     0     0
             c6t5d0  ONLINE       0     0     0
             c6t7d0  ONLINE       0     0     0
errors: No known data errors
# cfgadm | grep c6t7d0
sata4/7::dsk/c6t7d0            disk         connected    configured   ok
# zpool export epool
# cfgadm -c unconfigure sata4/7
Unconfigure the device at: /devices/pci@2,0/pci1022,7458@7/pci11ab,11ab@1:7
This operation will suspend activity on the SATA device
Continue (yes/no)? y
# zpool import epool
# zpool status epool
   pool: epool
  state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
         the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
    see: http://www.sun.com/msg/ZFS-8000-D3
  scrub: resilver completed with 0 errors on Thu Apr 26 11:38:21 2007
config:
         NAME        STATE     READ WRITE CKSUM
         epool       DEGRADED     0     0     0
           mirror    DEGRADED     0     0     0
             c7t6d0  ONLINE       0     0     0
             c7t5d0  ONLINE       0     0     0
             c5t5d0  ONLINE       0     0     0
             c6t6d0  ONLINE       0     0     0
             c6t5d0  ONLINE       0     0     0
             c6t7d0  UNAVAIL      0     0     0  cannot open
errors: No known data errors
#
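(For completeness, recovery from the state above is presumably just the reverse of the removal steps, reattaching the device and bringing it back online as the "action" line suggests:)

# cfgadm -c configure sata4/7
# zpool online epool c6t7d0
# zpool status epool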
Nenad Cimerman wrote:
> You can - easily:
> # zpool export mypool
> Then you take out one of the disks and put it into another system or a safe place.
> Afterwards you simply import the pool again:
> # zpool import mypool
> 
> Note - you can NOT import both disks separately, as they are both tagged as belonging to the same zpool.
> 
> I just tried this using files as pool devices. I didn't test it with real disks/slices, but it shouldn't make any difference.
> 
> HTH,
> Nenad.
> 
> PS: I know, the reply is pretty late... I just read this thread.
Robert Milkowski
2007-Apr-26  22:19 UTC
[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?
Hello Cindy,
Thursday, April 26, 2007, 8:57:54 PM, you wrote:
CSSC> Nenad,
CSSC> I've seen this solution offered before, but I would not recommend it
CSSC> except as a last resort, unless you don't care about the health of
CSSC> the original pool.
CSSC> Removing a device from an exported pool could be very bad, depending
CSSC> on the pool's redundancy. You might not get all your data back unless
CSSC> you put the disk back.
CSSC> See the output below.
CSSC> Definitely not for a pool and data on a production system.
CSSC> Cindy
CSSC> # zpool status epool
CSSC>    pool: epool
CSSC>   state: ONLINE
CSSC>   scrub: none requested
CSSC> config:
CSSC>          NAME        STATE     READ WRITE CKSUM
CSSC>          epool       ONLINE       0     0     0
CSSC>            mirror    ONLINE       0     0     0
CSSC>              c7t6d0  ONLINE       0     0     0
CSSC>              c7t5d0  ONLINE       0     0     0
CSSC>              c5t5d0  ONLINE       0     0     0
CSSC>              c6t6d0  ONLINE       0     0     0
CSSC>              c6t5d0  ONLINE       0     0     0
CSSC>              c6t7d0  ONLINE       0     0     0
CSSC> errors: No known data errors
CSSC> # cfgadm | grep c6t7d0
CSSC> sata4/7::dsk/c6t7d0            disk         connected    configured   ok
CSSC> # zpool export epool
CSSC> # cfgadm -c unconfigure sata4/7
CSSC> Unconfigure the device at: /devices/pci@2,0/pci1022,7458@7/pci11ab,11ab@1:7
CSSC> This operation will suspend activity on the SATA device
CSSC> Continue (yes/no)? y
CSSC> # zpool import epool
CSSC> # zpool status epool
CSSC>    pool: epool
CSSC>   state: DEGRADED
CSSC> status: One or more devices could not be opened.  Sufficient replicas exist for
CSSC>          the pool to continue functioning in a degraded state.
CSSC> action: Attach the missing device and online it using 'zpool online'.
CSSC>     see: http://www.sun.com/msg/ZFS-8000-D3
CSSC>   scrub: resilver completed with 0 errors on Thu Apr 26 11:38:21 2007
CSSC> config:
CSSC>          NAME        STATE     READ WRITE CKSUM
CSSC>          epool       DEGRADED     0     0     0
CSSC>            mirror    DEGRADED     0     0     0
CSSC>              c7t6d0  ONLINE       0     0     0
CSSC>              c7t5d0  ONLINE       0     0     0
CSSC>              c5t5d0  ONLINE       0     0     0
CSSC>              c6t6d0  ONLINE       0     0     0
CSSC>              c6t5d0  ONLINE       0     0     0
CSSC>              c6t7d0  UNAVAIL      0     0     0  cannot open
CSSC> errors: No known data errors
CSSC> #
What's wrong with the above? It's perfectly normal, and in such a config
it's definitely safe (there are still 5 copies of valid data).
-- 
Best regards,
 Robert                            mailto:rmilkowski at task.gda.pl
                                       http://milek.blogspot.com
Robert Milkowski
2007-Apr-26  22:39 UTC
[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?
Hello Cindy,
Friday, April 27, 2007, 1:28:05 AM, you wrote:
CSSC> Hi Robert,
CSSC> I just want to be clear that you can't just remove a disk from an
CSSC> exported pool without penalty upon import:
CSSC> - If the underlying redundancy of the original pool doesn't support
CSSC> it, you lose data
Yep, as with any other SW or HW RAID, if you do it uncleanly.
As I understand it, work is underway to allow cleanly removing a device
from a pool (redundant or not).
CSSC> - Some penalty exists even for redundant pools, which is running
CSSC> in DEGRADED mode until you put the disk back
What penalty? It's just a warning that one device is missing. Nothing
else really happens, and you should be able to just remove the unavailable
device and get rid of that message if you want.
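(A minimal sketch of that, assuming the device names from Cindy's example:)

# zpool detach epool c6t7d0
# zpool status epool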
-- 
Best regards,
 Robert                            mailto:rmilkowski at task.gda.pl
                                       http://milek.blogspot.com
Cindy.Swearingen at Sun.COM
2007-Apr-26  23:28 UTC
[zfs-discuss] Re: Re: Re: How much do we really want zpool remove?
Hi Robert,

I just want to be clear that you can't just remove a disk from an
exported pool without penalty upon import:

- If the underlying redundancy of the original pool doesn't support
it, you lose data

- Some penalty exists even for redundant pools, which is running
in DEGRADED mode until you put the disk back

Cindy

Robert Milkowski wrote:
> What's wrong with the above? It's perfectly normal, and in such a config
> it's definitely safe (there are still 5 copies of valid data).
Rainer Heilke
2007-Apr-27  23:10 UTC
[zfs-discuss] Re: Re: Re: Re: How much do we really want zpool remove?
> Nenad,
>
> I've seen this solution offered before, but I would not recommend it
> except as a last resort, unless you don't care about the health of
> the original pool.

This is emphatically not what was being requested by me, in fact. I agree; I would be highly suspicious of the data's integrity, depending upon the zpool structure.

There are a couple of other things to consider. First, this requires exporting the whole pool, which automatically assumes an outage is acceptable. Second, it assumes the other half of the mirror will be used elsewhere, as in "on another system". This is also not a fair automatic assumption to make. The comments I made in this thread (I can't speak for others) were that the broken-off mirror may be used safely elsewhere, with the understanding that this could even be on the same system, perhaps for archival reference or some such. "Elsewhere" does not automatically imply "on another server". My apologies if I did not make this clear.

Rainer
Cindy.Swearingen at Sun.COM
2007-Apr-30  16:38 UTC
[zfs-discuss] Re: Re: Re: Re: How much do we really want zpool remove?
Hi Rainer,

This is a long thread, and I wasn't commenting on your previous
replies regarding mirror manipulation. If I was, I would have done
so directly. :-)

I saw the export-a-pool-to-remove-a-disk solution described in a Sun doc.

My point (and I agree with your points below) is that making a pool
unavailable to remove a disk is not a good administrative practice if
you are unclear about the impact on the overall health of the original
pool.

Currently, if what you really want is to remove a disk from a redundant
pool, then better options are zpool detach or zpool replace (a short
sketch follows after the quoted text below).

Cindy

Rainer Heilke wrote:
> This is emphatically not what was being requested by me, in fact. I agree; I would be highly suspicious of the data's integrity, depending upon the zpool structure.
>
> There are a couple of other things to consider. First, this requires exporting the whole pool, which automatically assumes an outage is acceptable. Second, it assumes the other half of the mirror will be used elsewhere, as in "on another system". This is also not a fair automatic assumption to make. The comments I made in this thread (I can't speak for others) were that the broken-off mirror may be used safely elsewhere, with the understanding that this could even be on the same system, perhaps for archival reference or some such. "Elsewhere" does not automatically imply "on another server". My apologies if I did not make this clear.
>
> Rainer
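(A rough sketch of those two options, reusing the device names from the earlier example; you would run one or the other, and c8t0d0 is only a hypothetical replacement disk:)

# zpool detach epool c6t7d0
# zpool replace epool c6t7d0 c8t0d0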
Rainer Heilke
2007-Apr-30  23:39 UTC
[zfs-discuss] Re: Re: Re: Re: Re: How much do we really want zpool remove?
> Hi Rainer, > > This is a long thread and I wasn''t commenting on your > previous > replies regarding mirror manipulation. If I was, I > would have done > so directly. :-)Yes, I realize. I did the response on your post because I was agreeing with you. :-) I was just extending your comment by indicating the idea proposed before your post was not acceptable, and didn''t address the core functionality missing. Sorry if my placing of my post confused things. Rainer This message posted from opensolaris.org