similar to: zfs destroy -f and dataset is busy?

Displaying 20 results from an estimated 9000 matches similar to: "zfs destroy -f and dataset is busy?"

2010 Jul 25
4
zpool destroy causes panic
I'm trying to destroy a zfs array which I recently created. It contains nothing of value. # zpool status pool: storage state: ONLINE status: One or more devices could not be used because the label is missing or invalid. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Replace the device using 'zpool replace'.
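When `zfs destroy` or `zpool destroy` reports the target is busy, the usual checklist is to look for mounts, holds, and open files before forcing anything. A minimal sketch, assuming hypothetical pool/dataset names (`storage/data`) on Solaris:

```shell
# Find anything still mounted, snapshotted, or held on the pool
zfs list -t all -r storage
zfs holds -r storage/data      # user holds block destruction until released
fuser -c /storage/data         # list processes with files open on the mount
zfs unmount -f storage/data    # force the unmount, then retry the destroy
zfs destroy -r storage/data
```

If a hold is reported, `zfs release <tag> storage/data@snap` removes it; only then will the destroy proceed.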
2007 Dec 12
0
Degraded zpool won't online disk device, instead resilvers spare
I've got a zpool that has 4 raidz2 vdevs, each with 4 disks (750GB), plus 4 spares. At one point 2 disks failed (in different vdevs). The message in /var/adm/messages for the disks was 'device busy too long'. Then SMF printed this message: Nov 23 04:23:51 x.x.com EVENT-TIME: Fri Nov 23 04:23:51 EST 2007 Nov 23 04:23:51 x.x.com PLATFORM: Sun Fire X4200 M2, CSN:
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran zpool offline home c0t6d0 and zpool replace home c0t6d0 c8t1d0, and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point is the vdev in question now has
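The replacement workflow described above can be sketched as a command sequence (device and pool names are examples from the post, not prescriptions):

```shell
# Take the failing disk out of service, then resilver onto the new one
zpool offline home c0t6d0
zpool replace home c0t6d0 c8t1d0

# Watch until the resilver reports completed; the pool should then
# return to ONLINE on its own
zpool status -v home

# If the old device still shows in the config after the resilver,
# detaching it clears the lingering DEGRADED state
zpool detach home c0t6d0
```

A pool that stays DEGRADED after a successful resilver usually still has the old device (or an in-use spare) attached as one half of a replacing/spare pair; `zpool detach` resolves it.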
2010 Feb 27
1
slow zfs scrub?
hi all I have a server running snv_131 and the scrub is very slow. I have a cron job starting it every week, and now it's been running for a while and is very, very slow: scrub: scrub in progress for 40h41m, 12.56% done, 283h14m to go The configuration is listed below, consisting of three raidz2 groups with seven 2TB drives each. The root fs is on a pair of X25-M (gen 1)
2010 Apr 14
1
Checksum errors on and after resilver
Hi all, I recently experienced a disk failure on my home server and observed checksum errors while resilvering the pool and on the first scrub after the resilver had completed. Now everything seems fine, but I'm posting this to get help with calming my nerves and to detect any possible future faults. Let's start with some specs. OSOL 2009.06 Intel SASUC8i (w/ LSI 1.30IT FW) Gigabyte
2008 May 04
3
Some bugs/inconsistencies.
Hi. I'm working on getting the most recent ZFS into FreeBSD's CVS. Because of the huge amount of changes, I decided to work on ZFS regression tests, so I'm more or less sure nothing broke in the meantime. (Yes, I know about the ZFS test suite, but unfortunately I wasn't able to port it to FreeBSD; it was just too much work. I'm afraid it is too
2010 Oct 14
0
AMD/Supermicro machine - AS-2022G-URF
Sorry for the long post, but I know those trying to decide on hardware often want to see details about what people are using. I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am starting to use. I successfully transferred a deduped zpool with 1.x TB of files and 60 or so zfs filesystems using mbuffer from an old build 134 system with 6 drives - it ran at about 50MB/s or
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and zfs replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity: # zpool status pool: bigpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty, FMA said it was in hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank" my primary pool had kicked in two hot spares because it was so discombobulated. ------------------- EMAIL ------------------- List of faulty resources:
2007 Jul 18
1
Converting existing ZFS pool to MPxIO
We have a Sun v890, and I'm interested in converting an existing ZFS zpool from c#t#d# to MPxIO. % zpool status pool: data state: ONLINE status: ONLINE scrub: scrub completed with 0 errors on Sun Jul 15 10:58:33 2007 config: NAME STATE READ WRITE CKSUM data ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t2d0
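On Solaris, the standard way to switch existing devices to MPxIO is `stmsboot`, which updates the device paths across a reboot; ZFS re-discovers its disks by on-disk label, so the pool follows the rename automatically. A hedged sketch (pool name `data` is from the post):

```shell
# Enable multipathing; stmsboot updates device mappings and
# prompts for the reboot it needs
stmsboot -e

# After the reboot, the vdevs show the new multipathed
# (WWN-based) device names; the pool itself is untouched
zpool status data
```

Because ZFS identifies vdevs by label rather than by path, no export/import dance is strictly required, though exporting first is a cautious option.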
2013 May 24
0
zpool resource fails with incorrect error
I'm working to expand / develop on the zpool built-in type, but the zpool command is failing and Puppet's returned stderr is not what I get if I copy/paste the command given by the debug output. # cat /etc/puppet/manifests/zpool_raidz2.pp zpool { 'tank': ensure => present, raidz => [ 'd01 d02 d03 d04', 'd05 d06
2010 Jan 17
1
raidz2 import, some slices, some not
I am in the middle of converting a FreeBSD 8.0-RELEASE system to OpenSolaris b130. In order to import my stuff, the only way I knew to make it work (from testing in VirtualBox) was to do this: label a bunch of drives with an EFI label using the OpenSolaris live CD, then use those drives in FreeBSD to create a zpool. This worked fine. (though I did get a warning in FreeBSD about GPT
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5TB drive to resilver while it continually restarted, I eliminated all of the snapshot-taking facilities which were enabled: 2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0 2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3 maxtxg=567354 2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
2011 Jul 21
4
Raidz2 slow read speed (under 5MB/s)
Hello all, I'm building a file server (or rather a storage box that I intend to access via Workgroup from primarily Windows machines) using zfs raidz2 and OpenIndiana 148. I will be using this to stream Blu-ray movies and other media, so I will be happy if I get just 20MB/s reads, which seems like a pretty low standard considering some people are getting 100+. This is my first time with OI, and
2010 Jul 05
5
never ending resilver
Hi list, Here's my case: pool: mypool state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go config: NAME STATE READ WRITE CKSUM filerbackup13
2010 Sep 30
0
ZFS Raidz2 problem, detached drive
I have an X4500 Thumper box with 48x 500GB drives set up in a pool and split into raidz2 sets of 8 - 10 drives within the single pool. I had a failed disk which I unconfigured with cfgadm and replaced, no problem, but it wasn't recognised as a Sun drive in format, and unbeknownst to me someone else logged in remotely at the time and issued a zpool replace.... I corrected the system/drive
2010 Dec 05
4
Zfs ignoring spares?
Hi all, I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 vdevs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zfs offlining these and then zfs replacing them with online spares, the resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status
2008 Nov 24
2
replacing disk
somehow I have an issue replacing my disk. [20:09:29] root at adas: /root > zpool status mypooladas pool: mypooladas state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see:
2010 Oct 04
3
hot spare remains in use
Hi, I had a hot spare kick in to replace a failed drive, but the drive then appears to be fine anyway. After clearing the error, status shows that the drive was resilvered, but the spare is still kept in use. zpool status pool2 pool: pool2 state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM pool2 ONLINE 0 0 0 raidz2
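A spare that stays "INUSE" after the original drive checks out healthy is normal ZFS behavior: the spare remains attached until one side of the spare/original pair is explicitly detached. An illustrative sequence (device names here are hypothetical):

```shell
# Return the spare to the AVAIL list and keep the original disk
zpool detach pool2 c4t5d0    # c4t5d0 = the hot spare

# ...or make the spare permanent and retire the original instead:
# zpool detach pool2 c2t3d0  # c2t3d0 = the formerly-failed disk
```

Whichever device is detached, the other stays as the active member of the raidz2 vdev, and `zpool status` drops the spare sub-entry.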
2009 Oct 19
0
EON ZFS Storage 0.59.4 based on snv_124 released!
Embedded Operating system/Networking (EON), a RAM-based live ZFS NAS appliance, has been released on Genunix! Many thanks to Genunix.org for download hosting and serving the OpenSolaris community. EON ZFS storage is available in 32/64-bit CIFS and Samba versions: EON 64-bit x86 CIFS ISO image version 0.59.4 based on snv_124 * eon-0.594-124-64-cifs.iso * MD5: