search for: sys_syscall32

Displaying 11 results from an estimated 11 matches for "sys_syscall32".

2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been experiencing ZFS lock ups regularly (perhaps once every 2-3 days). The machine is a backup server and receives hourly ZFS snapshots from another thumper - as such, the amount of zfs activity tends to be reasonably high. After about 48 - 72 hours, the file system seems to lock up and I'm unable to do anything
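
For a wedge like this, one commonly suggested first step (a sketch, assuming the console still responds) is to dump every kernel thread stack and look for pile-ups in ZFS entry points such as spa_config_enter() or cv_wait():

    # Dump all kernel thread stacks to a file, then look for ZFS pile-ups
    echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt
    grep -c spa_config_enter /var/tmp/threads.txt
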
2007 Nov 27
0
zpool detach hangs, causing other zpool commands, format, df etc. to hang
...fe8001c30c10 _resume_from_idle+0xf8() ] swtch+0x110() cv_wait+0x68() spa_config_enter+0x50() spa_vdev_enter+0x2a() spa_vdev_detach+0x39() zfs_ioc_vdev_detach+0x48() zfsdev_ioctl+0x13e() cdev_ioctl+0x1d() spec_ioctl+0x50() fop_ioctl+0x25() ioctl+0xac() sys_syscall32+0x101() Other zpool commands, df, and format are all waiting on the spa_namespace_lock mutex to be released: PC: _resume_from_idle+0xf8 CMD: zpool status stack pointer for thread fffffe84d34b3ba0: fffffe8001439bf0 [ fffffe8001439bf0 _resume_from_idle+0xf8() ] swtch+0x110() turnstile_bloc...
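
The stack shows the detach parked inside spa_config_enter() while everything else queues behind spa_namespace_lock. A hedged way to confirm who owns the mutex (the address below is a placeholder for whatever the first command prints):

    # Print owner and waiters of the global pool namespace lock
    echo "spa_namespace_lock::mutex" | mdb -k
    # Stack of the owning thread, substituting the owner address printed above
    echo "<owner-thread-addr>::findstack -v" | mdb -k
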
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and: PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP 827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209 Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
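
Since zdb runs in userland, its 209 LWPs can be inspected directly; a sketch using the PID from the ps output above:

    # Userland stacks for every LWP of the zdb process
    pstack 827 > /var/tmp/zdb.stacks
    # Is it still issuing system calls at all?
    truss -p 827 2>&1 | head -20
    # Kernel-side view of the same threads
    echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k
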
2007 Mar 21
4
HELP!! I can't mount my zpool!!
...pecfs:spec_ioctl+50 () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ede0 genunix:fop_ioctl+1a () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047eec0 genunix:ioctl+ac () Mar 21 11:09:17 SERVER142 genunix: [ID 655072 kern.notice] fffffe800047ef10 unix:sys_syscall32+101 () Mar 21 11:09:17 SERVER142 unix: [ID 100000 kern.notice] Mar 21 11:09:17 SERVER142 genunix: [ID 672855 kern.notice] syncing file systems...
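
If the panic wrote a dump, a generic way to recover the full picture afterwards (assuming savecore left unix.0/vmcore.0 under /var/crash/SERVER142) is:

    # Open the saved crash dump instead of the live kernel
    mdb unix.0 vmcore.0
    > ::status        # panic string and dump summary
    > ::stack         # panicking thread's stack
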
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
...call_dircallback+0xfe() ffffff000cb54c20 devname_lookup_func+0x4cf() ffffff000cb54ca0 devzvol_lookup+0xf8() ffffff000cb54d20 sdev_iter_datasets+0xb0() ffffff000cb54da0 devzvol_readdir+0xd6() ffffff000cb54e20 fop_readdir+0xab() ffffff000cb54ec0 getdents64+0xbc() ffffff000cb54f10 sys_syscall32+0xff() -- DISK--- -bash-4.0$ sudo /usr/sbin/zdb -l /dev/dsk/c7t1d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version: 22 name: 'puddle' state: 0 txg: 55553139 pool_guid: 13462109782214169516 hostid...
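
ZFS keeps four copies of this label on every device (two at the front, two at the end), so comparing what zdb -l prints across devices can show whether a copy is stale. A sketch, with device names other than c7t1d0s0 made up:

    # Print every readable label on each suspect device
    for d in c7t0d0s0 c7t1d0s0; do
        echo "== $d =="
        zdb -l /dev/dsk/$d
    done
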
2007 Nov 25
2
Corrupted pool
...fe80004a1db0 specfs:spec_ioctl+50 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1de0 genunix:fop_ioctl+25 () Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1ec0 genunix:ioctl+ac () Nov 24 04:03:35 foo.com genunix: [ID 655072 kern.notice] fffffe80004a1f10 unix:sys_syscall32+101 () Nov 24 04:03:35 foo.com unix: [ID 100000 kern.notice] It appears the ZFS pool on the host is toast, since the box panics each time we try to import it. :( The stack trace from bug #6458218 is similar, but there are enough differences that lead me to question whether it's the underl...
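
One commonly suggested way to keep a pool like this from panicking the box on every boot is to stop it being auto-imported, then retry the import deliberately; a sketch:

    # From single-user mode: pools listed in the cache file are imported at boot
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad
    # After the reboot, attempt the import by hand (pool name is a placeholder)
    zpool import somepool
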
2007 Aug 26
3
Kernel panic receiving incremental snapshots
...pecfs:spec_ioctl+50 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112de0 genunix:fop_ioctl+25 () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112ec0 genunix:ioctl+ac () Aug 25 17:01:50 ldasdata6 genunix: [ID 655072 kern.notice] fffffe8001112f10 unix:sys_syscall32+101 () Aug 25 17:01:50 ldasdata6 unix: [ID 100000 kern.notice] Aug 25 17:01:50 ldasdata6 genunix: [ID 672855 kern.notice] syncing file systems... Aug 25 17:01:51 ldasdata6 genunix: [ID 904073 kern.notice] done Aug 25 17:01:52 ldasdata6 genunix: [ID 111219 kern.notice] dumping to /dev/md/dsk/d3, o...
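
Assuming the dump to /dev/md/dsk/d3 completed, pulling it off the dump device and reading the console messages around the panic would look roughly like:

    # Copy the dump into /var/crash/<hostname> as unix.N/vmcore.N
    savecore -v
    # Messages leading up to the panic, read from the saved dump
    cd /var/crash/ldasdata6
    echo "::msgbuf" | mdb unix.0 vmcore.0
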
2010 Jun 25
11
Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network link (10Gb/s). We can see peak values of up to 150MB/s, but on average about 40-50MB/s are replicated. This is far below the bandwidth a 10Gb link can offer. Is it possible that ZFS is giving replication too low a priority, or throttling it too much?
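
zfs send output is often bursty, so the pipe can sit idle while the sender walks metadata. A common mitigation (hostnames, dataset names, and buffer sizes below are placeholders) is to put a large memory buffer between send and receive so the link stays fed:

    # Sender: buffer up to 1 GB in RAM before the data hits the wire
    zfs send tank/fs@snap | mbuffer -s 128k -m 1G -O receiver:9090
    # Receiver:
    mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/fs
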
2011 Jan 18
4
Zpool Import Hanging
...0x78() ffffff0008f2bbf0 spa_import+0xee() ffffff0008f2bc40 zfs_ioc_pool_import+0xc0() ffffff0008f2bcc0 zfsdev_ioctl+0x177() ffffff0008f2bd00 cdev_ioctl+0x45() ffffff0008f2bd40 spec_ioctl+0x5a() ffffff0008f2bdc0 fop_ioctl+0x7b() ffffff0008f2bec0 ioctl+0x18e() ffffff0008f2bf10 sys_syscall32+0xff() I have this in a loop running every 15 secs, and I'll occasionally see some ddt_* lines as well (current dedup ratio is 1.05). The ratio was originally about 1.09 when I started the import (from zdb -e Nalgene); is the system doing something special, or is this just ZFS destroying the pen...
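
Those ddt_* frames suggest the import is grinding through the dedup tables rather than sitting on a lock. A hedged way to confirm it is still making forward progress (assumes the kernel build exposes the ddt_* functions to fbt):

    # Count dedup-table function calls in 10-second windows
    dtrace -n 'fbt::ddt_*:entry { @[probefunc] = count(); }
               tick-10s { printa(@); trunc(@); }'
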
2008 Mar 04
5
Network Latency
Hiya, I'm trying to track down some throughput latency that our customer seems to be attributing to our product. I can't see what he's talking about, but I want to try and get some deeper granularity than I might get with something like smokeping, and maybe even see if it's down to something tunable on our end. I've been looking for some examples on how
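
For finer granularity than smokeping, DTrace can quantize exactly where the time goes; a starting sketch (the probe selection here is an assumption, adjust it to the traffic in question) measuring how long socket sends take per process:

    # Latency distribution of send/sendto/sendmsg, broken out by executable
    dtrace -n '
        syscall::send*:entry { self->ts = timestamp; }
        syscall::send*:return /self->ts/ {
            @[execname] = quantize(timestamp - self->ts);
            self->ts = 0;
        }'
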
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
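
ZFS's gzip property defaults to level 6, and the compression runs in kernel context, which can starve interactive work during writes like this. A hedged first experiment is to drop to a cheaper level (the dataset name below is a placeholder) and compare:

    # gzip-1 through gzip-9 are accepted; lzjb is the lightweight alternative
    zfs set compression=gzip-1 tank/audio
    zfs get compression,compressratio tank/audio
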