similar to: ZFS error handling - suggestion

Displaying 20 results from an estimated 2000 matches similar to: "ZFS error handling - suggestion"

2006 Oct 24
3
determining raidz pool configuration
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v <poolname>, but it doesn't seem to agree with the
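A minimal sketch of the commands that usually answer this, assuming a hypothetical pool name tank:
    zpool status tank    # prints the vdev tree: raidz/raidz2/mirror groups, members, spares, log devices
    zpool list           # capacity and health summary for every imported pool
    zdb -C tank          # dump the cached pool configuration as a cross-check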
2008 Apr 02
1
delete old zpool config?
Hi experts, zpool import shows some weird config of an old zpool: bash-3.00# zpool import pool: data1 id: 7539031628606861598 state: FAULTED status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-3C config: data1 UNAVAIL insufficient replicas
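A stale entry like this normally keeps appearing because the old devices still carry ZFS labels; a hedged sketch of clearing them, with a hypothetical device name (zpool labelclear is only present on newer releases, and it is destructive):
    zpool import                            # note which devices the faulted pool claims
    zpool labelclear -f /dev/dsk/c1t2d0s0   # wipe the stale ZFS labels from each such device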
2008 Oct 08
1
Troubleshooting ZFS performance with SIL3124 cards
Hi! I have a problem with ZFS and most likely the SATA PCI-X controllers. I run opensolaris 2008.11 snv_98 and my hardware is a Sun Netra x4200 M2 with 3 SIL3124 PCI-X cards with 4 eSATA ports each, connected to 3 1U disk chassis which each hold 4 SATA disks manufactured by Seagate, model ES.2 (500 and 750), for a total of 12 disks. Every disk has its own eSATA cable connected to the ports on the PCI-X
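The usual first step with a report like this is to work out whether one disk, one controller, or the whole pool is slow; a rough sketch, assuming a hypothetical pool name tank:
    iostat -xn 5            # per-device service times; one disk with a huge asvc_t points at a port or cable
    zpool iostat -v tank 5  # per-vdev and per-disk throughput inside the pool
    fmdump -eV | tail -40   # recent FMA error telemetry (transport resets, timeouts)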
2008 Apr 11
0
How to replace root drive if ZFS data is on it?
Hi, Experts: A customer has an X4500 with the boot drives mirrored (c5t0d0s0 and c5t4d0s0) by SVM; ZFS uses two other partitions on these same drives (c5t0d0s3 and c5t4d0s3). If we need to replace the disk drive c5t0d0, do we need to do anything on the ZFS side (c5t0d0s3 and c5t4d0s3) first, or just follow the regular boot drive replacement procedure? Below is the summary of their current ZFS
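The ZFS side of such a replacement usually reduces to an offline/replace wrapped around the normal SVM boot-disk procedure; this is only a sketch, with zpool1 as a placeholder pool name:
    zpool offline zpool1 c5t0d0s3   # take the ZFS slice out of service before pulling the disk
    # ...replace the physical drive, relabel it, and rebuild the SVM submirrors as usual...
    zpool replace zpool1 c5t0d0s3   # resilver ZFS onto the s3 slice of the new disk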
2007 Apr 11
0
raidz2 another resilver problem
Hello zfs-discuss, One of the disks started to behave strangely. Apr 11 16:07:42 thumper-9.srv sata: [ID 801593 kern.notice] NOTICE: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1: Apr 11 16:07:42 thumper-9.srv port 6: device reset Apr 11 16:07:42 thumper-9.srv scsi: [ID 107833 kern.warning] WARNING: /pci@1,0/pci1022,7458@3/pci11ab,11ab@1/disk@6,0 (sd27): Apr 11 16:07:42 thumper-9.srv
2008 Apr 01
29
OpenSolaris ZFS NAS Setup
If it's of interest, I've written up some articles on my experiences of building a ZFS NAS box which you can read here: http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/ I used CIFS to share the filesystems, but it will be a simple matter to use NFS instead: issue the command 'zfs set sharenfs=on pool/filesystem' instead of 'zfs set
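For reference, sharing is a per-filesystem ZFS property on OpenSolaris; a short sketch with tank/data as a placeholder filesystem:
    zfs set sharenfs=on tank/data        # NFS export, as mentioned in the article
    zfs set sharesmb=on tank/data        # CIFS share via the in-kernel SMB server
    zfs get sharenfs,sharesmb tank/data  # confirm what is currently shared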
2009 Feb 04
8
Data loss bug - sidelined??
In August last year I posted this bug, a brief summary of which would be that ZFS still accepts writes to a faulted pool, causing data loss, and potentially silent data loss: http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932 There have been no updates to the bug since September, and nobody seems to be assigned to it. Can somebody let me know what's happening with this
2009 Dec 04
2
USB sticks show on one set of devices in zpool, different devices in format
Hello, I had snv_111b running for a while on an HP DL160 G5, with two 16GB USB sticks comprising the mirrored rpool for boot and four 1TB drives comprising another pool, pool1, for data. That's been working just fine for a few months. Yesterday I got it into my mind to upgrade the OS to the latest, which at that point was snv_127. That worked, and all was well. Also did an upgrade to the
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing, so I ran 'zpool offline home/c0t6d0' and 'zpool replace home c0t6d0 c8t1d0', and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point is the vdev in question now has
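If the old disk is still listed under a 'replacing' entry after the resilver, one possible cleanup (a sketch, assuming that is the case) is:
    zpool status -x home      # confirm which device is keeping the pool DEGRADED
    zpool detach home c0t6d0  # drop the old, offlined disk if it is still attached
    zpool clear home          # reset the error counters once the config is healthy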
2008 Mar 12
3
Mixing RAIDZ and RAIDZ2 zvols in the same zpool
I have a customer who has implemented the following layout: As you can see, he has mostly raidz zvols but has one raidz2 in the same zpool. What are the implications here? Is this a bad thing to do? Please elaborate. Thanks, Scott Gaspard Scott.J.Gaspard at Sun.COM > NAME STATE READ WRITE CKSUM > > chipool1 ONLINE 0 0 0 > >
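zpool itself flags this situation: adding a vdev whose parity level differs from the rest of the pool is refused as a mismatched replication level unless -f is given. A small illustration with hypothetical devices:
    zpool create chipool1 raidz c0t0d0 c0t1d0 c0t2d0
    zpool add chipool1 raidz2 c0t3d0 c0t4d0 c0t5d0 c0t6d0      # refused: mismatched replication level
    zpool add -f chipool1 raidz2 c0t3d0 c0t4d0 c0t5d0 c0t6d0   # -f overrides the check
The practical implication is that redundancy is per top-level vdev: the raidz groups still tolerate only one disk failure each, and losing any whole top-level vdev loses the pool.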
2010 Feb 08
1
Big send/receive hangs on 2009.06
So, I was running my full backup last night, backing up my main data pool zp1, and it seems to have hung. Any suggestions for additional data gathering? -bash-3.2$ zpool status zp1 pool: zp1 state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool
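Some generic data-gathering steps for a hung send/receive, assuming the processes are still around (the PID is a placeholder):
    zpool status -v zp1                        # suspended I/O, errors, or a stuck scrub?
    ps -ef | egrep 'zfs (send|receive|recv)'   # find the send and receive PIDs
    pstack <pid>                               # see where each process is blocked
    iostat -xn 5                               # is some device pegged busy, or completely idle?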
2009 Jun 19
8
x4500 resilvering spare taking forever?
I've got a Thumper running snv_57 and a large ZFS pool. I recently noticed a drive throwing some read errors, so I did the right thing and replaced it with a spare. Everything went well, but the resilvering process seems to be taking an eternity: # zpool status pool: bigpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
2010 Apr 10
21
What happens when unmirrored ZIL log device is removed ungracefully
Due to recent experiences, and discussion on this list, my colleague and I performed some tests: Using Solaris 10, fully upgraded (zpool version 15 is the latest there, which does not have the log device removal introduced in zpool version 19): if you lose an unmirrored log device in any way possible, the OS will crash, and the whole zpool is permanently gone, even after reboots. Using opensolaris,
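The two standard mitigations are mirroring the slog and, from pool version 19 onwards, removing it; a sketch with hypothetical device names:
    zpool add tank log mirror c4t0d0 c4t1d0   # mirrored slog: losing one device is survivable
    zpool remove tank c4t2d0                  # standalone log device removal, pool version >= 19 only
    zpool upgrade -v                          # lists which pool version introduces which feature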
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci@
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
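Before touching uberblocks by hand it is worth dumping the labels and, on builds that have it, trying a rewind import; a heavily hedged sketch for a file-backed pool, with placeholder paths and pool name:
    zdb -l /path/to/pool.file                  # print the vdev labels (pool GUID, txg, config)
    zpool import -d /path/to/dir -F -n mypool  # dry run: report whether a rewind import would succeed
    zpool import -d /path/to/dir -F mypool     # actually roll back to an older, consistent txg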
2011 Nov 05
4
ZFS Recovery: What do I try next?
I would like to pick the brains of the ZFS experts on this list: What would you do next to try and recover this zfs pool? I have a ZFS RAIDZ1 pool named bank0 that I cannot import. It was composed of 4 1.5 TiB disks. One disk is totally dead. Another had SMART errors, but using GNU ddrescue I was able to copy all the data off successfully. I have copied all 3 remaining disks as images using
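One commonly tried approach is to attach the rescued images as block devices and attempt a read-only, rewind-style import; a sketch only, with hypothetical paths (the readonly and -F import options exist only on newer releases):
    lofiadm -a /images/disk1.img                           # repeat per image -> /dev/lofi/1, /dev/lofi/2, ...
    zpool import -d /dev/lofi                              # does bank0 show up, and in what state?
    zpool import -d /dev/lofi -f -o readonly=on -F bank0   # attempt a read-only recovery import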
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance. I did some simple mkfile 512G tests and found out that on average ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
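A rough way to see where such a ceiling comes from while the test runs, with tank as a placeholder pool name:
    mkfile 512G /tank/bigfile &   # the same kind of sequential-write load as in the test
    zpool iostat -v tank 10       # per-vdev and per-disk write throughput during the run
    iostat -xn 10                 # shows whether individual disks or a controller saturate first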
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 min and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( When the drive cage failed it hung the server and the box rebooted. After it rebooted, the entire pool is gone and in the state below. I had only written a few
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts, I've had a lingering question for some time: when I use "zpool iostat -v" the values do not quite sum up. In the example below with a raidz2 array made of 6 drives: * the reported 33K of writes are less than two disks' workload at this time (at 17.9K each); overall disk writes are 107.4K = 325% of 33K. * write ops sum up to 18 = 225% of 8 ops to
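One thing worth ruling out when the numbers look inconsistent: without an interval argument zpool iostat reports averages since the pool was imported, not current activity. A small sketch with a placeholder pool name:
    zpool iostat -v mypool 5 6   # six 5-second samples; the first report is still the cumulative average
With raidz2 the per-disk write totals are also expected to exceed the pool-level figure, since parity and allocation padding are written in addition to the application data.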
2010 May 18
25
Very serious performance degradation
Hi, I'm running Opensolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks: zfs_raid ONLINE 0 0 0 raidz1 ONLINE 0 0 0 c7t2d0 ONLINE 0 0 0 c7t3d0 ONLINE 0 0 0 c7t4d0 ONLINE 0 0