similar to: zpool errors without fmdump or dmesg errors

Displaying 20 results from an estimated 1000 matches similar to: "zpool errors without fmdump or dmesg errors"

2009 Oct 23
7
cryptic vdev name from fmdump
This morning we got a fault management message from one of our production servers stating that a fault in one of our pools had been detected and fixed. Looking into the error using fmdump gives:
fmdump -v -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006
TIME                 UUID                                 SUNW-MSG-ID
Oct 22 09:29:05.3448 90ea244e-1ea9-4bd6-d2be-e4e7a021f006 FMD-8000-4M Repaired
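A minimal sketch of how such an event can usually be traced back to a concrete device; the UUID is the one from the excerpt, while the disk path is only a placeholder:

    # Dump the full error report for the event; ZFS ereports carry vdev GUID/path details
    fmdump -eV -u 90ea244e-1ea9-4bd6-d2be-e4e7a021f006

    # Cross-check the reported GUID against the labels on a suspect disk (path is hypothetical)
    zdb -l /dev/rdsk/c0t0d0s0 | grep guid

    # Confirm the pool itself reports healthy after the repair
    zpool status -xv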
2010 Oct 28
0
Good write, but slow read speeds over the network
Hi all, I am running Netatalk on OSol snv134 on a Dell R610 server with 32 GB RAM. I am experiencing different speeds when writing to and reading from the pool. The pool itself consists of two FC LUNs that each build a vdev (no comments on that please, we discussed that already! ;) ). Now, I am having a couple of AFP clients that access this pool either via Fast Ethernet or even Gigabit Ethernet.
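A hedged first step for a case like this is to separate pool performance from network performance; the mount point, file name, and interval below are placeholders:

    # Read a large file back locally (ideally larger than RAM, to defeat the ARC)
    # to see whether the pool or the network is the bottleneck
    dd if=/tank/afp/testfile of=/dev/null bs=1024k

    # Watch per-disk throughput and service times while an AFP client reads
    iostat -xn 5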
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug a couple of weeks ago, but that bug has been filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to - I believe - deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixable
2010 Feb 24
0
disks in zpool gone at the same time
Hi, Yesterday all the disks in my two zpools got disconnected at the same time. They are not real disks - they are LUNs from a StorageTek 2530 array. What could that be - a failing LSI card or the mpt driver in 2009.06? After reboot I got four disks in FAILED state - zpool clear fixed things with a resilver. Here is how it started (/var/adm/messages): Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info] /pci@0,0/pci10de,5d@
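A brief sketch of the recovery/diagnosis steps implied in this excerpt; the pool name is a placeholder:

    # Clear the error counters once the LUNs are visible again
    zpool clear tank

    # Watch the resilver and per-device error counters
    zpool status -v tank

    # Check the FMA error log for transport/driver-level events around the outage
    fmdump -eV | less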
2010 Jan 10
5
Repeating scrub does random fixes
I've been using a 5-disk raidZ for years on an SXCE machine which I converted to OSOL. The only time I ever had zfs problems in SXCE was with snv_120, which was fixed. So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on random disks. If I repeat the scrub, it will fix errors on other disks. Occasionally it runs cleanly. That it doesn't
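For reproducing this kind of report, a minimal scrub-and-compare loop looks roughly like the following; the pool name is a placeholder:

    # Run a scrub and note which devices show CKSUM errors when it finishes
    zpool scrub tank
    zpool status -v tank

    # Clear the counters between runs so each pass is counted separately
    zpool clear tank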
2011 May 10
5
Tuning disk failure detection?
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS arrays (Solaris 10 U9). The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4 mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110610
And errors for the drive were
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, as I have learned from the discussion about which SSD to use as ZIL drives, I stumbled across this article, which discusses short stroking for increasing IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now, I am wondering if using a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
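For context, attaching either choice of device as a dedicated log is the same operation; a sketch with placeholder pool and device names:

    # Add a mirrored pair (SSDs or short-stroked SAS drives) as a separate intent log
    zpool add tank log mirror c3t0d0 c3t1d0

    # Verify the log vdev shows up and is healthy
    zpool status tank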
2010 Apr 05
3
no hot spare activation?
While testing a zpool with a different storage adapter using my "blkdev" device, I did a test which made a disk unavailable -- all attempts to read from it report EIO. I expected my configuration (which is a 3 disk test, with 2 disks in a RAIDZ and a hot spare) to work where the hot spare would automatically be activated. But I'm finding that ZFS does not behave this way
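A hedged sketch of the relevant knobs; pool and device names are placeholders, and note that spare activation is normally driven by the FMA diagnosis rather than by the autoreplace property:

    # Attach a hot spare to the pool
    zpool add testpool spare c2t3d0

    # autoreplace only governs automatic use of a new device in the same physical slot;
    # spare kick-in depends on a fault actually being diagnosed, so check FMA's view
    zpool set autoreplace=on testpool
    fmadm faulty

    # Force the spare in manually if no diagnosis ever fires
    zpool replace testpool c2t1d0 c2t3d0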
2008 Oct 08
5
Resilver hanging?
How can I diagnose why a resilver appears to be hanging at a certain percentage, seemingly doing nothing for quite a while, even though the HDD LED is lit up permanently (no apparent head seeking)? The drives in the pool are WD RAID Editions, thus have TLER and should time out on errors in just seconds. Neither ZFS nor the syslog reported any I/O errors, so it wasn't the disks.
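A rough diagnostic loop for a seemingly stuck resilver; the pool name is a placeholder:

    # Check whether the resilver counters actually move between samples
    zpool status -v tank

    # Look for a single slow or stuck device: high %b/asvc_t with no error counts
    iostat -xn 5

    # See if the FMA error log is accumulating retryable events the syslog never shows
    fmdump -eV | tail -20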
2010 Apr 04
15
Diagnosing Permanent Errors
I would like to get some help diagnosing permanent errors on my files. The machine in question has 12 1TB disks connected to an Areca raid card. I installed OpenSolaris build 134 and according to zpool history, created a pool with zpool create bigraid raidz2 c4t0d0 c4t0d1 c4t0d2 c4t0d3 c4t0d4 c4t0d5 c4t0d6 c4t0d7 c4t1d0 c4t1d1 c4t1d2 c4t1d3 I then backed up 806G of files to the machine, and had
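The usual starting point for permanent errors is the per-file report; a short sketch using the pool name from the excerpt:

    # List the datasets/files affected by permanent (uncorrectable) errors
    zpool status -v bigraid

    # After restoring or deleting the listed files, re-scrub and clear to see if the errors go away
    zpool scrub bigraid
    zpool clear bigraid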
2008 Dec 04
11
help diagnosing system hang
Hi all, First, I'll say my intent is not to spam a bunch of lists, but after posting to opensolaris-discuss I had someone communicate with me offline that these lists would possibly be a better place to start. So here we are. For those on all three lists, sorry for the repetition. Second, this message is meant to solicit help in diagnosing the issue described below. Any hints on
2007 Mar 23
2
ZFS ontop of SVM - CKSUM errors
Hi.
bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc
I created the first zpool (a stripe of 85 disks) and did some simple stress testing - everything seemed almost alright (~700MB/s sequential reads, ~430MB/s sequential writes). Then I destroyed the pool and put an SVM stripe on top of the same disks, utilizing the fact that ZFS had already put an EFI label on them and s0 represents almost the entire disk. Then on top of
2010 Dec 17
6
copy complete zpool via zfs send/recv
Hi, I want to move all the ZFS fs from one pool to another, but I don't want to "gain" an extra level in the folder structure on the target pool. On the source zpool I used zfs snapshot -r tank@moveTank on the root fs and I got a new snapshot in all sub fs, as expected. Now, I want to use zfs send -R tank@moveTank | zfs recv targetTank/... which would place all zfs fs
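A sketch of the receive option that avoids the extra level, using the pool and snapshot names from the excerpt; -F is only needed if datasets already exist on the target:

    # -d strips the source pool name from the received path, so tank/a/b
    # lands at targetTank/a/b instead of targetTank/tank/a/b
    zfs send -R tank@moveTank | zfs recv -d -F targetTank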
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60Gb disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces. The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives. The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2010 Mar 27
14
b134 - Mirrored rpool won't boot unless both mirrors are present
I have two 500 GB drives on my system that are attached to built-in SATA ports on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the system, remove either drive, and then try to boot the system, it will fail to boot. If I disable the splash screen, I find that it will display the SunOS banner and the hostname, but it never gets as far as the "Reading ZFS config:"
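One commonly suggested check for this symptom is whether the boot loader was ever installed on the second half of the mirror; a sketch for x86, with a placeholder device name:

    # Install GRUB on the other rpool mirror disk so either disk can boot alone
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

    # Confirm both sides of the rpool mirror are online
    zpool status rpool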
2007 Sep 08
1
zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state after a drive in a raidz2 vdev has been successfully replaced. In this particular case drive c0t6d0 was failing so I ran, zpool offline home c0t6d0 zpool replace home c0t6d0 c8t1d0 and after the resilvering finished the pool reports a degraded state. Hopefully this is incorrect. At this point the vdev in question now has
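If the old disk is still hanging off a "replacing" or offlined entry after the resilver, detaching it usually returns the pool to ONLINE; a sketch using the names from the excerpt:

    # Remove the old, offlined disk from the vdev once the new one has resilvered
    zpool detach home c0t6d0

    # Re-check the pool state
    zpool status -v home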
2010 Oct 04
8
Can I "upgrade" a striped pool of vdevs to mirrored vdevs?
Hi, once I created a zpool of single vdevs not using mirroring of any kind. Now I wonder if it's possible to add vdevs and mirror the currently existing ones. Thanks, budy -- This message posted from opensolaris.org
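Converting a striped pool to mirrors is done per top-level vdev with zpool attach; a sketch with placeholder pool and device names:

    # Attach a second disk to an existing single-disk vdev to turn it into a mirror;
    # repeat for every single-disk vdev in the pool
    zpool attach tank c1t0d0 c2t0d0

    # Each top-level vdev should now show as mirror-N and start resilvering
    zpool status tank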
2011 Jan 04
0
zpool import hangs system
Hello, I've been using NexentaStor Community Edition with no issues for a while now, however last week I was going to rebuild a different system, so I started to copy all the data off it to a raidz2 volume on my CE system. This was going fine until I noticed that the copy had stalled and the entire system was non-responsive. I let it sit for several hours with no
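If the hang reproduces on every import, a hedged next step (assuming a build recent enough to support recovery-mode import, roughly snv_128 and later, and a placeholder pool name) is a dry-run rollback import:

    # -n only reports what a recovery import would roll back, without changing anything
    zpool import -nF tank

    # If the dry run looks sane, import with recovery mode for real
    zpool import -F tank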
2010 Oct 19
0
NFS/SATA lockups (svc_cots_kdup no slots free & sata port time out)
I have a Solaris 10 U8 box (142901-14) running as an NFS server with a 23 disk zpool behind it (three RAIDZ2 vdevs). We have a single Intel X-25E SSD operating as an slog ZIL device attached to a SATA port on this machine's motherboard. The rest of the drives are in a hot-swap enclosure. Infrequently (maybe once every 4-6 weeks), the zpool on the box stops responding and although we