Lutz Schumann
2010-Apr-26 21:41 UTC
[zfs-discuss] Spare in use although disk is healthy?
Hello list,

a pool shows some strange status:

  volume: zfs01vol
   state: ONLINE
   scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38 2010
  config:

        NAME          STATE     READ WRITE CKSUM
        zfs01vol      ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t4d0    ONLINE       0     0     0
            c3t4d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t5d0    ONLINE       0     0     0
            c3t5d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t8d0    ONLINE       0     0     0
            c3t8d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t9d0    ONLINE       0     0     0
            c3t9d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t12d0   ONLINE       0     0     0
            spare     ONLINE       0     0     0
              c3t12d0 ONLINE       0     0     0
              c3t21d0 ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t13d0   ONLINE       0     0     0
            c3t13d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t16d0   ONLINE       0     0     0
            c3t16d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t17d0   ONLINE       0     0     0
            c3t17d0   ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t20d0   ONLINE       0     0     0
            c3t20d0   ONLINE       0     0     0
        logs          ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t0d0    ONLINE       0     0     0
            c3t0d0    ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c2t1d0    ONLINE       0     0     0
            c3t1d0    ONLINE       0     0     0
        cache
          c0t0d0      ONLINE       0     0     0
          c0t1d0      ONLINE       0     0     0
          c0t2d0      ONLINE       0     0     0
        spares
          c2t21d0     AVAIL
          c3t21d0     INUSE     currently in use

The spare is in use, although there is no failed disk in the pool.

Can anyone "interpret" this? Is this a bug?

Thanks,
Robert
--
This message posted from opensolaris.org
On 04/27/10 09:41 AM, Lutz Schumann wrote:
> Hello list,
>
> a pool shows some strange status:
>
> volume: zfs01vol
> state: ONLINE
> scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38
[...]
>           mirror      ONLINE       0     0     0
>             c2t12d0   ONLINE       0     0     0
>             spare     ONLINE       0     0     0
>               c3t12d0 ONLINE       0     0     0
>               c3t21d0 ONLINE       0     0     0
[...]
>         spares
>           c2t21d0     AVAIL
>           c3t21d0     INUSE     currently in use
>
> The spare is in use, although there is no failed disk in the pool.
>
> Can anyone "interpret" this? Is this a bug?

Was the drive c3t12d0 replaced or faulty at some point? You should be able to detach the spare.

--
Ian.
Cindy Swearingen
2010-Apr-26 22:33 UTC
[zfs-discuss] Spare in use although disk is healthy?
Hi Lutz,

You can try the following commands to see what happened:

1. Someone else might have replaced the disk with a spare, which would be recorded by this command:

   # zpool history -l zfs01vol

2. If the disk had some transient outage, then maybe the spare kicked in. Use the following command to see whether something happened to this disk:

   # fmdump -eV

   This command might produce a lot of output, but look for c3t12d0 occurrences.

3. If the c3t12d0 disk is okay, try detaching the spare back to the spare pool like this:

   # zpool detach zfs01vol c3t21d0

Thanks,
Cindy

On 04/26/10 15:41, Lutz Schumann wrote:
> Hello list,
>
> a pool shows some strange status:
>
> volume: zfs01vol
> state: ONLINE
> scrub: scrub completed after 1h21m with 0 errors on Sat Apr 24 04:22:38 2010
> config:
>
>         NAME          STATE     READ WRITE CKSUM
>         zfs01vol      ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t4d0    ONLINE       0     0     0
>             c3t4d0    ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t5d0    ONLINE       0     0     0
>             c3t5d0    ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t8d0    ONLINE       0     0     0
>             c3t8d0    ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t9d0    ONLINE       0     0     0
>             c3t9d0    ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t12d0   ONLINE       0     0     0
>             spare     ONLINE       0     0     0
>               c3t12d0 ONLINE       0     0     0
>               c3t21d0 ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t13d0   ONLINE       0     0     0
>             c3t13d0   ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t16d0   ONLINE       0     0     0
>             c3t16d0   ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t17d0   ONLINE       0     0     0
>             c3t17d0   ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t20d0   ONLINE       0     0     0
>             c3t20d0   ONLINE       0     0     0
>         logs          ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t0d0    ONLINE       0     0     0
>             c3t0d0    ONLINE       0     0     0
>           mirror      ONLINE       0     0     0
>             c2t1d0    ONLINE       0     0     0
>             c3t1d0    ONLINE       0     0     0
>         cache
>           c0t0d0      ONLINE       0     0     0
>           c0t1d0      ONLINE       0     0     0
>           c0t2d0      ONLINE       0     0     0
>         spares
>           c2t21d0     AVAIL
>           c3t21d0     INUSE     currently in use
>
> The spare is in use, although there is no failed disk in the pool.
>
> Can anyone "interpret" this? Is this a bug?
>
> Thanks,
> Robert
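[Editor's note: since `fmdump -eV` can produce a lot of output, the events for one disk are easier to spot after filtering with grep. A minimal sketch, run against illustrative sample lines rather than real fmdump output (which is more verbose); the sample file name and its contents are assumptions for the example:]

```shell
# Write a few fmdump-style ereport lines to a sample file
# (illustrative only -- real fmdump -eV output is more detailed).
cat > /tmp/ereports.sample <<'EOF'
Apr 23 2010 18:32:26.363495457 ereport.io.scsi.cmd.disk.dev.rqs.derr c3t12d0
Apr 23 2010 18:32:26.363482031 ereport.io.scsi.cmd.disk.recovered c3t12d0
Apr 24 2010 09:00:00.000000000 ereport.io.scsi.cmd.disk.recovered c2t4d0
EOF

# Keep only the lines mentioning the suspect disk.
grep 'c3t12d0' /tmp/ereports.sample
```

On a live system the equivalent would be a pipeline such as `fmdump -eV | grep c3t12d0`.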
Lutz Schumann
2010-May-02 16:55 UTC
[zfs-discuss] Spare in use although disk is healthy?
Hello,

thanks for the feedback and sorry for the delay in answering.

I checked the log and fmadm. The log does not show any changes; however, fmadm shows:

Apr 23 2010 18:32:26.363495457 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 23 2010 18:32:26.363482031 ereport.io.scsi.cmd.disk.recovered

Same thing for the other disk:

Apr 21 2010 15:02:24.117303285 ereport.io.scsi.cmd.disk.dev.rqs.derr
Apr 21 2010 15:02:24.117300448 ereport.io.scsi.cmd.disk.recovered

It seems there was a VERY short transient error.

I will try to detach the spare.

Is this a bug?

Robert
--
This message posted from opensolaris.org
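[Editor's note: the derr/recovered pair from Apr 23 fall within the same second and differ only in the nanosecond fraction, which supports the reading that the outage was extremely brief. A quick sketch of the gap, using the fractional seconds copied from the ereports above:]

```shell
# The two Apr 23 ereports share the timestamp 18:32:26; only the
# fractional part differs. Compute the gap in microseconds.
awk 'BEGIN { printf "%.0f us\n", (0.363495457 - 0.363482031) * 1e6 }'
# -> 13 us
```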
Cindy Swearingen
2010-May-03 17:49 UTC
[zfs-discuss] Spare in use although disk is healthy?
Hi Robert,

Could be a bug. What kind of system and disks are reporting these errors?

Thanks,
Cindy

On 05/02/10 10:55, Lutz Schumann wrote:
> Hello,
>
> thanks for the feedback and sorry for the delay in answering.
>
> I checked the log and fmadm. The log does not show any changes; however, fmadm shows:
>
> Apr 23 2010 18:32:26.363495457 ereport.io.scsi.cmd.disk.dev.rqs.derr
> Apr 23 2010 18:32:26.363482031 ereport.io.scsi.cmd.disk.recovered
>
> Same thing for the other disk:
>
> Apr 21 2010 15:02:24.117303285 ereport.io.scsi.cmd.disk.dev.rqs.derr
> Apr 21 2010 15:02:24.117300448 ereport.io.scsi.cmd.disk.recovered
>
> It seems there was a VERY short transient error.
>
> I will try to detach the spare.
>
> Is this a bug?
> Robert