search for: _resume_from_idle

Displaying 9 results from an estimated 9 matches for "_resume_from_idle".

2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been experiencing ZFS lock-ups regularly (perhaps once every 2-3 days). The machine is a backup server and receives hourly ZFS snapshots from another Thumper; as such, the amount of ZFS activity tends to be reasonably high. After about 48-72 hours, the file system seems to lock up and I'm unable to do anything
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:

   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   827 root     4936M 4931M sleep   59    0   0:50:47 0.2% zdb/209

Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
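
A sketch of how one might see where such a stuck zdb is spending its time (PID 827 is taken from the prstat output above; mdb -k needs root privileges):

    # user-level stacks of all threads in the zdb process
    pstack 827

    # kernel-side view of the same threads via the live kernel debugger
    echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k

If every thread is parked in cv_wait() under a ZFS function, the process is blocked in the kernel rather than computing.
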
2007 Nov 27
0
zpool detach hangs, causing other zpool commands, format, df, etc. to hang
Customer has a Thumper running: SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc, where running "zpool detach disk c6t7d0" to detach a mirror causes the zpool command to hang with the following kernel stack trace:

PC: _resume_from_idle+0xf8   CMD: zpool detach disk1 c6t7d0
stack pointer for thread fffffe84d34b4920: fffffe8001c30c10
[ fffffe8001c30c10 _resume_from_idle+0xf8() ]
  swtch+0x110()
  cv_wait+0x68()
  spa_config_enter+0x50()
  spa_vdev_enter+0x2a()
  spa_vdev_detach+0x39()
  zfs_ioc_vdev_detach+0x48()...
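
A stack like this can be captured on a live system with mdb; a minimal sketch, using the thread address quoted in the report:

    # print the kernel stack of the hung zpool detach thread
    echo "fffffe84d34b4920::findstack -v" | mdb -k

The cv_wait() beneath spa_config_enter() shows the thread blocked waiting for the pool's configuration lock, the recurring signature in these reports.
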
2011 May 03
4
Multiple disk failures cause zpool hang
Hi, There seem to be a few threads about zpool hangs; do we have a workaround to resolve the hang issue without rebooting? In my case, I have a pool with disks from external LUNs via a fiber cable. When the cable is unplugged while there is I/O in the pool, all zpool-related commands hang (zpool status, zpool list, etc.), and putting the cable back does not solve the problem. Eventually, I
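
Before rebooting, it may be worth confirming what the fault manager and the I/O framework think of the re-attached LUNs; a sketch of the usual first checks ("tank" is a placeholder pool name, and zpool clear may itself hang while the pool is wedged):

    fmadm faulty     # any faults logged against the disks or the pool?
    cfgadm -al       # are the LUNs back in a connected/configured state?
    zpool clear tank # ask ZFS to retry the failed I/Os
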
2009 Jan 27
5
Replacing HDD in x4500
The vendor wanted to come in and replace an HDD in the 2nd X4500, as it was "constantly busy", and since our X4500 has always died miserably in the past when an HDD dies, they wanted to replace it before the HDD actually died. The usual was done: the HDD was replaced, and resilvering started and ran for about 50 minutes. Then the system hung, same as always; all ZFS-related commands would just
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...the pool and no errors were found, and zdb -l reports no issues that I can see.

> ::ps ! grep find
R   1248   1243   1248   1243   101 0x4a004000 ffffff02630d5728 find
> ffffff02630d5728::walk thread | ::findstack
stack pointer for thread ffffff025f15b3e0: ffffff000cb54650
[ ffffff000cb54650 _resume_from_idle+0xf1() ]
  ffffff000cb54680 swtch+0x145()
  ffffff000cb546b0 cv_wait+0x61()
  ffffff000cb54700 txg_wait_synced+0x7c()
  ffffff000cb54770 zil_replay+0xe8()
  ffffff000cb54830 zvol_create_minor+0x227()
  ffffff000cb54850 sdev_zvol_create_minor+0x19()
  ffffff000cb549c0 devzvol_create_link+0x49...
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority, or throttling it too much?
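
zfs send produces a bursty stream, so a common mitigation is to put a large buffer between the two ends instead of piping straight into ssh; a sketch assuming mbuffer is installed on both hosts (host and dataset names are placeholders):

    zfs send tank/fs@snap \
        | mbuffer -s 128k -m 1G \
        | ssh backuphost "mbuffer -s 128k -m 1G | zfs receive -F backup/fs"

The buffers absorb the pauses at transaction-group boundaries on the sending side, which can otherwise drag a 150MB/s peak down to the 40-50MB/s average described above.
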
2011 Jan 18
4
Zpool Import Hanging
...processes is doing, but I'm not a hard-core ZFS/Solaris dev so I don't know if I'm reading the output correctly, but it appears that ZFS is continuing to delete a snapshot/FS from before (reading from the top down):

stack pointer for thread ffffff01ce408e00: ffffff0008f2b1f0
[ ffffff0008f2b1f0 _resume_from_idle+0xf1() ]
  ffffff0008f2b220 swtch+0x145()
  ffffff0008f2b250 cv_wait+0x61()
  ffffff0008f2b2a0 txg_wait_open+0x7a()
  ffffff0008f2b2e0 dmu_tx_wait+0xb3()
  ffffff0008f2b320 dmu_tx_assign+0x4b()
  ffffff0008f2b3b0 dmu_free_long_range_impl+0x12b()
  ffffff0008f2b400 dmu_free_object+0xe6()
  f...
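
A stack ending in dmu_free_long_range_impl() under txg_wait_open() suggests the import is finishing a large pending destroy. On builds recent enough to support it, a read-only import avoids replaying that work while data is copied off; a sketch (pool name is a placeholder):

    zpool import -o readonly=on tank
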
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.

root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
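
Since the pool's only device is a file on UFS, the on-disk vdev labels can at least be inspected with zdb before attempting recovery; a sketch, with the backing file path as a placeholder:

    # dump the contents of all four vdev labels on the backing file
    zdb -l /ufs/path/to/poolfile
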