Hi, folks,

testing zfs. I'd created a raidz2 zpool and ran a large backup onto it.
Then I pulled one drive (an 11-drive pool, plus one hot spare), and it
resilvered with the hot spare. zpool status -x shows me

  state: DEGRADED
 status: One or more devices could not be used because the label is
         missing or invalid. Sufficient replicas exist for the pool to
         continue functioning in a degraded state.
 action: Replace the device using 'zpool replace'.
    see: http://zfsonlinux.org/msg/ZFS-8000-4J
   scan: resilvered 1.91T in 29h33m with 0 errors on Tue Jun 11 15:45:59 2019
 config:

        NAME         STATE     READ WRITE CKSUM
        export1      DEGRADED     0     0     0
          raidz2-0   DEGRADED     0     0     0
            sda      ONLINE       0     0     0
            spare-1  DEGRADED     0     0     0
              sdb    UNAVAIL      0     0     0
              sdl    ONLINE       0     0     0
            sdc      ONLINE       0     0     0
            sdd      ONLINE       0     0     0
            sde      ONLINE       0     0     0
            sdf      ONLINE       0     0     0
            sdg      ONLINE       0     0     0
            sdh      ONLINE       0     0     0
            sdi      ONLINE       0     0     0
            sdj      ONLINE       0     0     0
            sdk      ONLINE       0     0     0
        spares
          sdl        INUSE     currently in use

but when I try zpool replace export1 /dev/sdb1, it says, nope,

    invalid vdev specification
    use '-f' to override the following errors:
    /dev/sdb1 is part of active pool 'export1'

Any idea what I'm doing wrong?

mark
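[The refusal above is ZFS protecting a disk that still carries the old
pool's label. A minimal sketch of one way past it, assuming the same
physical disk went back into the sdb slot; as the follow-up below shows,
zpool online can also be enough when it really is the same disk. The
labelclear step destroys the old label, so only run it on the disk being
recycled:

    # dump any ZFS labels still present on the partition
    zdb -l /dev/sdb1

    # wipe the stale label so the disk looks new to the pool
    zpool labelclear -f /dev/sdb1

    # then retry the replace; the one-argument form replaces a
    # device with itself in the same slot
    zpool replace export1 sdb
]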
mark wrote:
> Hi, folks,
>
> testing zfs. I'd created a raidz2 zpool and ran a large backup onto it.
> Then I pulled one drive (an 11-drive pool, plus one hot spare), and it
> resilvered with the hot spare. zpool status -x shows me
> state: DEGRADED
<snip>
> but when I try zpool replace export1 /dev/sdb1, it says, nope,
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/sdb1 is part of active pool 'export1'
>
> Any idea what I'm doing wrong?

Never mind. More googling, with different search terms, showed me that in
this case, I had to use zpool online export1 /dev/sdb1. I would have
thought that zfs would understand this automatically, and not need me to
tell it this, but....

mark
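[A minimal sketch of that fix, using the pool-relative device name rather
than the /dev/sdb1 path mark used; either should be accepted:

    # tell ZFS the device is usable again; it rescans the label
    # and resilvers back onto it
    zpool online export1 sdb

    # watch the resilver and the state of the spare
    zpool status export1
]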
mark wrote:
> mark wrote:
>>
>> testing zfs. I'd created a raidz2 zpool and ran a large backup onto it.
>> Then I pulled one drive (an 11-drive pool, plus one hot spare), and it
>> resilvered with the hot spare. zpool status -x shows me
>> state: DEGRADED
>> status: One or more devices could not be used because the label is
>> missing or invalid. Sufficient replicas exist for the pool to continue
>> functioning in a degraded state.
>> action: Replace the device using 'zpool replace'.
>> see: http://zfsonlinux.org/msg/ZFS-8000-4J
>> scan: resilvered 1.91T in 29h33m with 0 errors on Tue Jun 11 15:45:59
>> 2019 config:
<snip>
> Never mind. More googling, with different search terms, showed me that in
> this case, I had to use zpool online export1 /dev/sdb1.
<snip>

Ok, now, either zpool's confused, or I am. As I said, I did the online,
and now I have this:

        export1      ONLINE       0     0     0
          raidz2-0   ONLINE       0     0     0
            sda      ONLINE       0     0     0
            spare-1  ONLINE       0     0     0
              sdb    ONLINE       0     0     0
              sdl    ONLINE       0     0     0
            sdc      ONLINE       0     0     0
            sdd      ONLINE       0     0     0
            sde      ONLINE       0     0     0
            sdf      ONLINE       0     0     0
            sdg      ONLINE       0     0     0
            sdh      ONLINE       0     0     0
            sdi      ONLINE       0     0     0
            sdj      ONLINE       0     0     0
            sdk      ONLINE       0     0     0
        spares
          sdl        INUSE     currently in use

1. Why is sdl there with sdb under "spare-1", and listed as "online",
while it's listed again under "spares", and "currently in use"? Do I have
to issue another manual command, to tell it to drop it out of the pool,
and let it go back to being *just* a spare?

mark
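[What the double listing appears to mean: while a hot spare is active, ZFS
keeps it as one side of a temporary mirror (spare-1) with the device it
stood in for, and it remains on the spares list as INUSE until one side of
that mirror is detached. A sketch of the two ways out, with the device
names taken from the status above:

    # option 1: keep the original sdb and send sdl back to
    # being *just* a spare (it returns to AVAIL)
    zpool detach export1 sdl

    # option 2: promote sdl to a permanent pool member and drop
    # sdb; sdl then leaves the spares list
    zpool detach export1 sdb
]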
try

    zpool replace export1 sdb sdl

but it says the spare is already in use, so I'm not sure why the resilver
isn't already in progress. You might have to remove sdl from the spares
list before you can use it in a replace.

On Fri, Jun 14, 2019 at 9:03 AM mark <m.roth at 5-cent.us> wrote:
> Hi, folks,
>
> testing zfs. I'd created a raidz2 zpool and ran a large backup onto it.
> Then I pulled one drive (an 11-drive pool, plus one hot spare), and it
> resilvered with the hot spare. zpool status -x shows me
> state: DEGRADED
<snip>
>         spares
>           sdl        INUSE     currently in use
>
> but when I try zpool replace export1 /dev/sdb1, it says, nope,
> invalid vdev specification
> use '-f' to override the following errors:
> /dev/sdb1 is part of active pool 'export1'
>
> Any idea what I'm doing wrong?
>
> mark

--
-john r pierce
  recycling used bits in santa cruz
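[A sketch of that suggestion, hedged since the spare is still INUSE and
would have to be freed before it can be reused; all names are from the
status output above:

    # first detach sdl from spare-1, returning it to the spares list
    zpool detach export1 sdl

    # drop it from the spares list entirely
    zpool remove export1 sdl

    # now it is a free disk and can replace the dead one outright
    zpool replace export1 sdb sdl
]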