search for: findstack

Displaying 13 results from an estimated 13 matches for "findstack".

2011 Jan 18
4
Zpool Import Hanging
...he machine was rebooted, it hung at the "Loading ZFS filesystems" line after loading the kernel; I booted the box with all drives unplugged and exported the pool. The machine was rebooted, and now the pool is hanging on import (zpool import -Fn Nalgene). I'm using "0t2761::pid2proc|::walk thread|::findstack" | mdb -k to try and view what the import process is doing, but I'm not a hard-core ZFS/Solaris dev, so I don't know if I'm reading the output correctly; it appears that ZFS is continuing to delete a snapshot/FS from before (reading from the top down): stack pointer for thread ffffff0...
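For reference, the full form of the pipeline quoted above (0t2761 is the poster's import PID in mdb's decimal notation) feeds the dcmd chain to the kernel debugger on stdin:

    # print the kernel stack of every thread belonging to PID 2761
    echo "0t2761::pid2proc | ::walk thread | ::findstack" | mdb -k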
2006 Jul 30
6
zfs mount stuck in zil_replay
...sbin/sh /lib/svc/method/fs-local 254 /usr/sbin/zfs mount -a [...] bash-3.00# zfs list|wc -l 46 Using df I can see most file systems are already mounted. > ::ps!grep zfs R 254 163 7 7 0 0x4a004000 00000600219e1800 zfs > 00000600219e1800::walk thread|::findstack -v stack pointer for thread 300013026a0: 2a10069ebb1 [ 000002a10069ebb1 cv_wait+0x40() ] 000002a10069ec61 txg_wait_synced+0x54(3000052f0d0, 2f2216d, 3000107fb90, 3000052f110, 3000052f112, 3000052f0c8) 000002a10069ed11 zil_replay+0xbc(60022609c08, 600226a1840, 600226a1870, 700d13d8, 7ba2d000, 60...
2009 Nov 11
2
ls -l hang, process unkillable
Hello, one of my colleagues has a problem with an application. The sysadmins responsible for that server told him it was the application's fault, but I think they are wrong, and so does he. From time to time the app becomes unkillable, and when trying to list the contents of some dir which is being read/written by the app, "ls" can list the contents, but "ls -l" gets stuck
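A minimal way to see where such a stuck process is blocked (a sketch, not from the thread; it assumes the hung ls is the only one running):

    # user-level stack of the hung ls
    pstack $(pgrep -x ls)
    # kernel-side stack via the same findstack pipeline
    echo "0t$(pgrep -x ls)::pid2proc | ::walk thread | ::findstack" | mdb -k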
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...and here is the stack; hopefully this gives someone a hint at what the issue is. I have scrubbed the pool and no errors were found, and zdb -l reports no issues that I can see. ::ps ! grep find R 1248 1243 1248 1243 101 0x4a004000 ffffff02630d5728 find > ffffff02630d5728::walk thread | ::findstack stack pointer for thread ffffff025f15b3e0: ffffff000cb54650 [ ffffff000cb54650 _resume_from_idle+0xf1() ] ffffff000cb54680 swtch+0x145() ffffff000cb546b0 cv_wait+0x61() ffffff000cb54700 txg_wait_synced+0x7c() ffffff000cb54770 zil_replay+0xe8() ffffff000cb54830 zvol_create_minor+0x2...
2008 Dec 28
2
zfs mount hangs
...ed. Dec 27 04:47:56 base Use is subject to license terms. My guess would be a broken CPU, maybe the old Ecache problem... Anyway, "zfs mount space" works fine, but "zfs mount space/postfix" hangs. A look at the zfs process shows: # echo "0t236::pid2proc|::walk thread|::findstack -v" | mdb -k stack pointer for thread 30001cecc00: 2a100fa2181 [ 000002a100fa2181 cv_wait+0x3c() ] 000002a100fa2231 txg_wait_open+0x58(60014aa1158, d000b, 0, 60014aa119c, 60014aa119e, 60014aa1150) 000002a100fa22e1 dmu_tx_assign+0x3c(60022dd3780, 1, 7, 60013cd5918, 5b, 1) 000002...
2006 Jan 13
26
A couple of issues
I've been testing ZFS since it came out on b27 and this week I BFUed to b30. I've seen two problems, one I'll call minor and the other major. The hardware is a Dell PowerEdge 2600 with 2 3.2GHz Xeons, 2GB memory and a perc3 controller. I have created a filesystem for over 1000 users on it and take hourly snapshots, which destroy the one from 24 hours ago, except the
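A rotation like the one described can be sketched as follows (the dataset name and hour-stamped snapshot naming are assumptions, not the poster's actual script):

    # hourly cron job: the hour-stamped name means each run replaces
    # the snapshot taken 24 hours earlier
    HOUR=$(date +%H)
    zfs destroy tank/home@hour-$HOUR 2>/dev/null
    zfs snapshot tank/home@hour-$HOUR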
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far from the bandwidth that a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority/throttling it too much?
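One commonly suggested mitigation (an assumption on my part, not from this thread) is to decouple the bursty send/receive streams from network latency with a large buffer on each end, e.g. with mbuffer; host and dataset names here are hypothetical:

    # sender: stream the snapshot through a 1 GB buffer, then over TCP
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O recvhost:9090
    # receiver:
    mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/fs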
2006 Mar 30
8
iostat -xn 5 _donot_ update: how to use DTrace
On Solaris 10 5.10 Generic_118822-23 sun4v sparc SUNW,Sun-Fire-T200 I run #iostat -xn 5 to monitor the IO statistics on an SF T2000 server. The system also has a heavy IO load, and for some reason iostat does not refresh (no updates at all). It seems like iostat is calling pause() and getting stuck there. Also, my HBA driver's interrupt stack trace indicates there is a lot of swtch(); the overall IOPS
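To confirm where a process like this is parked, a one-line DTrace sketch (the PID is hypothetical) counts the syscalls it enters:

    # if iostat really sits in pause(), that probe will dominate the counts
    dtrace -n 'syscall:::entry /pid == 1234/ { @[probefunc] = count(); }'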
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
...tor. bash-3.00# mdb -k Loading modules: [ unix krtld genunix specfs dtrace ufs sd px md ip sctp usba fcp fctl qlc crypto lofs zfs random nfs ptm ssd logindmux cpc fcip ] > ::ps!grep zpool R 17472 15928 17472 15928 0 0x4a004000 00000303b55647f8 zpool > 00000303b55647f8::walk thread|::findstack -v stack pointer for thread 307939f6fc0: 2a102d5cf41 [ 000002a102d5cf41 cv_wait+0x38() ] 000002a102d5cff1 txg_wait_synced+0x54(6000c977d90, 337e86, 337e84, 6000c977dd0, 6000c977dd2, 6000c977d88) 000002a102d5d0a1 zfs_sync+0xb4(6000ca16d00, 0, 6000ca00440, 6000ca00494, 0, 0) 000002a102d5d151 do...
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
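A hedged first diagnostic step (assumed, not taken from the thread) is to dump the pool's vdev labels, which is where the uberblock array lives; the path to the UFS-backed pool file is hypothetical:

    # dump all four vdev labels of the file-backed pool
    zdb -l /path/to/pool-backing-file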
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
...ited many threads, but 651 of them were still there. Then it took almost 3 minutes for nfsd to exit completely - during all those 3 minutes all 651 threads remained, and then suddenly, within seconds, all of them exited. While it was hanging I did: > ::ps!grep nfsd > 000006000a729068::walk thread|::findstack -v with the output logged to a file. I attached the output as nfsd.txt.gz. You can see that one thread was in: stack pointer for thread 30003621660: 2a1038c7021 [ 000002a1038c7021 cv_wait+0x3c() ] 000002a1038c70d1 exitlwps+0x10c(0, 100000, 42100002, 6000a729068, 6000a72912e, 42000002) 000002a1038c7181 pr...
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 Tb zpool, and it is causing kernel panics on my Solaris 10 280r (SPARC) server. The message I get on panic is this: panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024) This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have
2008 Nov 29
75
Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, file a meaningful bug report, fix it, whatever! Both to learn what the compression could be, and to induce a heavy load to expose issues, I am running with compress=gzip-9. I have two machines, both identical 800MHz P3 with
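The property being described is set per dataset (the pool and dataset names here are assumptions):

    # enable gzip at maximum level, then check the achieved ratio later
    zfs set compression=gzip-9 tank/data
    zfs get compressratio tank/data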