similar to: zpool detach hangs, causes other zpool commands, format, df etc. to hang

Displaying 20 results from an estimated 100 matches similar to: "zpool detach hangs, causes other zpool commands, format, df etc. to hang"

2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start, and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why? -- This message posted from opensolaris.org
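A first diagnostic step for a process wedged like this, a minimal sketch assuming the PID 827 from the prstat output above and a stock Solaris box, is to capture its user and kernel stacks:

# pstack 827
# echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k

pstack prints the userland stacks of all 209 threads; the mdb pipeline walks the same process's kernel threads and prints what each is blocked in, which usually narrows down whether zdb is stuck on I/O, a lock, or a transaction group.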
2010 Jan 12
0
dmu_zfetch_find - lock contention?
Hi, I have a mysql instance which, if I point more load at it, suddenly goes to 100% in SYS as shown below. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN MICROSYSTEMS SUN FIRE X4170 SERVER
BIOS Configuration: American Megatrends Inc. 07060215
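A hedged way to confirm whether a SYS-time spike like this is lock contention, assuming standard Solaris lockstat, is:

# lockstat -C -D 10 sleep 10
# lockstat -kIW -D 20 sleep 30

The first command reports the top 10 lock-contention events over ten seconds; the second profiles the kernel and shows which functions (dmu_zfetch_find, in the suspected case here) the CPUs are actually spinning in.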
2011 Jan 18
4
Zpool Import Hanging
Hi All, I believe this has been asked before, but I wasn't able to find too much information about the subject. Long story short, I was moving data around on a storage zpool of mine and a zfs destroy <filesystem> hung (or so I thought). This pool had dedup turned on at times while imported as well; it's running on a Nexenta Core 3.0.1 box (snv_134f). The first time the machine was
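When dedup has ever been enabled, a zfs destroy can spend a very long time updating the dedup table, which looks exactly like a hang. A rough way to gauge the DDT size once the pool is accessible, a sketch assuming the pool name, is:

# zdb -DD poolname

This prints dedup table statistics and a histogram; a DDT that does not fit in RAM is the usual reason destroys of deduped datasets take hours.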
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no longer access /dev/zvol/dsk/poolname, which holds l2arc and slog devices for another pool. I don't think this is related, since the pools are offline pending access to the volumes. I tried running find /dev/zvol/dsk/poolname -type f and here is the stack; hopefully this gives someone a hint at what the issue is. I have
2007 Nov 25
2
Corrupted pool
Howdy, We are using ZFS on one of our Solaris 10 servers, and the box panicked this evening with the following stack trace:
Nov 24 04:03:35 foo unix: [ID 100000 kern.notice]
Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fffffe80004a14d0 fffffffffb9b49f3 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1550 zfs:space_map_remove+239 ()
Nov 24 04:03:35 foo genunix: [ID
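If a crash dump was saved, the full panic can be examined offline with mdb; a minimal sketch, assuming dumps landed in the default /var/crash/foo directory:

# cd /var/crash/foo
# mdb unix.0 vmcore.0
> ::status
> ::stack
> ::msgbuf

::status shows the panic string, ::stack the panicking thread's stack (here ending in zfs:space_map_remove), and ::msgbuf the console messages leading up to it.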
2008 Apr 01
4
panic ... recursive mutex_enter installing a snv_85 domU on Linux dom0
While attempting to install snv_85 as a domU, with Debian etch as the dom0, the kernel panics before the rest of the system starts to boot. Currently I'm using the sid releases of xen 3.2 found in Debian sid. I stopped using 3.0.3, as it appears I'm unable to use a ramdisk setting.
# dmesg | grep -i mem
...
Memory: 1903728k/1949252k available (1623k kernel code, 36208k reserved,
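For reference, the ramdisk setting in question lives in the xen domU config file; a minimal sketch, with placeholder paths and values that are not from this thread:

name    = "snv85"
memory  = 1024
kernel  = "/path/to/solaris/unix"      # pv kernel; placeholder path
ramdisk = "/path/to/solaris/miniroot"  # the setting reportedly unusable under xen 3.0.3

Under xen 3.0.3 the ramdisk line reportedly had no effect for this setup, which is why the poster moved to 3.2.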
2007 Aug 26
3
Kernel panic receiving incremental snapshots
Before I open a new case with Sun, I am wondering if anyone has seen this kernel panic before? It happened on an X4500 running Sol10U3 while it was receiving incremental snapshot updates. Thanks.
Aug 25 17:01:50 ldasdata6 panic[cpu0]/thread=fffffe857d53f7a0:
Aug 25 17:01:50 ldasdata6 genunix: [ID 895785 kern.notice] dangling dbufs (dn=fffffe82a3532d10, dbuf=fffffe8b4e338b90)
Aug 25 17:01:50
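For context, the kind of replication in play here is the standard incremental send/receive pipeline; a sketch with hypothetical dataset and host names:

# zfs send -i tank/data@10:00 tank/data@11:00 | ssh backuphost zfs recv -F backup/data

The -i flag sends only the delta between the two snapshots; it is the receiving side of exactly this pipeline that panicked on the X4500.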
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU pegged at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
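A quick way to see where a CPU spinning in SYS is spending its time, a sketch using the DTrace profile provider:

# dtrace -n 'profile-1001 /arg0/ { @[stack()] = count(); } tick-30s { exit(0); }'

This samples kernel stacks at 1001 Hz for 30 seconds (arg0 is non-zero only when the CPU is in the kernel) and prints the hottest stacks, which would show what the export thread is grinding through.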
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been experiencing ZFS lock-ups regularly (perhaps once every 2-3 days). The machine is a backup server and receives hourly ZFS snapshots from another thumper - as such, the amount of zfs activity tends to be reasonably high. After about 48 - 72 hours, the file system seems to lock up and I'm unable to do anything
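When everything ZFS-related wedges like this, grouping kernel thread stacks by module can show what they are all blocked on; a sketch, assuming a build recent enough to have the ::stacks dcmd (b109 is):

# echo "::stacks -m zfs" | mdb -k

This summarizes all kernel threads with zfs frames in their stacks, deduplicated by stack, so a single stuck txg sync or lock holder stands out immediately.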
2008 Jun 16
3
[Bug 2247] New: tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos panics - zfs snapshot
http://defect.opensolaris.org/bz/show_bug.cgi?id=2247
Summary: tests/functional/cli_root/zpool_upgrade/zpool_upgrade_007_pos panics - zfs snapshot
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority:
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot:
Mar 21 11:09:17 SERVER142 panic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
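The workaround often mentioned for this family of space_map assertion panics, an assumption based on similar reports rather than anything confirmed in this thread, and one that papers over on-disk damage rather than fixing it, is to relax the assertion via /etc/system and reboot:

set zfs:zfs_recover=1
set aok=1

zfs_recover lets ZFS attempt to continue past certain on-disk inconsistencies; aok turns panicking assertions into warnings. The usual advice is to use these only long enough to import the pool and evacuate the data.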
2007 Oct 12
0
zfs: allocating allocated segment(offset=77984887808 size=66560)
How does one free segment(offset=77984887808 size=66560) on a pool that won't import? It looks like I hit:
http://bugs.opensolaris.org/view_bug.do?bug_id=6580715
http://mail.opensolaris.org/pipermail/zfs-discuss/2007-September/042541.html
When I luupgraded a ufs partition (a dvd-b62 that was bfu'd to b68) with a dvd of b74, it booted fine and I was doing the same thing that I had done on
2007 Nov 27
5
Dtrace probes for voluntary and involuntary context switches
Hi, I am profiling some workloads for voluntary and involuntary context switches. I am interested in finding out the reasons causing these two types of context switches. As far as I understand, an involuntary context switch happens on expiration of a time slice or when a higher-priority process comes in, while a voluntary switch generally happens when a process is waiting for I/O, etc. So to
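DTrace's sysinfo provider exposes both kinds of switch directly; a sketch, where the aggregation key is just an example:

# dtrace -n 'sysinfo:::pswitch { @all[execname] = count(); } sysinfo:::inv_swtch { @invol[execname] = count(); }'

pswitch fires on every context switch and inv_swtch only on involuntary ones, so the voluntary count is the difference; aggregating on stack() instead of execname would show the code paths that trigger each kind.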
2011 May 03
4
multiple disk failures cause zpool hang
Hi, There seems to be a few threads about zpool hangs; do we have a workaround to resolve the hang issue without rebooting? In my case, I have a pool with disks from external LUNs via a fiber cable. When the cable is unplugged while there is I/O in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and putting the cable back does not solve the problem. Eventually, I
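One knob worth checking before pulling cables, a suggestion not taken from the thread itself, is the pool's failmode property; the default of wait blocks all I/O, and with it most zpool commands, until the device comes back:

# zpool get failmode poolname
# zpool set failmode=continue poolname

With failmode=continue the pool returns EIO to new writes instead of blocking, which keeps zpool status and friends responsive during a device outage, at the cost of applications seeing errors.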
2008 Apr 28
3
[Bug 1657] New: tests/functional/acl/nontrivial/zfs_acl_cp_001_pos causes panic
http://defect.opensolaris.org/bz/show_bug.cgi?id=1657
Summary: tests/functional/acl/nontrivial/zfs_acl_cp_001_pos causes panic
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: critical
Priority: P2
2010 Aug 30
5
pool died during scrub
I have a bunch of sol10U8 boxes with ZFS pools, mostly all raidz2 8-disk stripes. They're all supermicro-based with retail LSI cards. I've noticed a tendency for things to go a little bonkers during the weekly scrub (they all scrub over the weekend), and that's when I'll lose a disk here and there. OK, fine, that's sort of the point, and they're
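For reference, weekend scrubs of this sort are typically just cron entries; a sketch with a hypothetical pool name:

# run every Saturday at 02:00
0 2 * * 6 /usr/sbin/zpool scrub tank

Staggering the start times across boxes (or across pools on one box) keeps the scrubs from all hammering shared infrastructure at once.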
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our X4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual was done: HDD replaced, resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority / throttling it too much?
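A common way to decouple the bursty send stream from the network, a sketch using the third-party mbuffer tool with placeholder host, port, and buffer sizes, is to put large buffers on both ends:

# on the sender
zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090
# on the receiver
mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/fs

zfs send alternates between reading from disk and stalling, and zfs recv commits in transaction groups; a gigabyte of buffering on each side lets the link run flat out through both kinds of stall.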
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages:
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber panic[cpu1]/thread=ffffff014ba86e40:
Feb 8
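For reference, the -d flag changes how the target dataset name is derived; a sketch with hypothetical names:

# zfs send -R tank/home@snap | zfs recv -d backup

With -d, the received dataset is named from the sent snapshot's path minus its pool name, so tank/home lands at backup/home; it is this path-remapping receive that triggered the panic here.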
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device, which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups, and I've been asked to recover the pool. The system panics when trying to do anything with the pool:
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
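Before touching anything, the vdev labels, which hold the uberblock ring this thread is about invalidating, can be inspected read-only; a sketch, assuming the path of the backing file:

# zdb -l /path/to/poolfile

This dumps the four labels on the vdev without importing the pool; comparing the transaction group numbers in their uberblocks is the starting point for deciding which uberblock to invalidate so the pool rolls back to an older, consistent state.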