similar to: dmu_zfetch_find - lock contention?

Displaying 14 results from an estimated 14 matches similar to: "dmu_zfetch_find - lock contention?"

2012 Aug 21
0
IDMAP cache creating tons of mutex spins
Good morning, We have been noticing trouble browsing a ZFS share, especially in the afternoon, and found our 8 cores pegged at 100% with over 100000 smtx per core in mpstat. We are running Solaris 5.11 with Samba 3.5.10, 48 GB of RAM and two 4-core Xeons. The fileserver is joined in domain mode to Windows 2003 R2 SP2 with Services for Unix installed, and we only have around 80
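A starting point for chasing an smtx storm like this on Solaris, as a sketch (the intervals are arbitrary; lockstat's default mode already records adaptive-mutex contention):

# mpstat 5                 (watch smtx and sys per CPU over time)
# lockstat sleep 10        (name the kernel locks actually being spun on)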
2007 Aug 23
1
EOF broken on zvol raw devices?
> I tried to copy an 8GB Xen domU disk image from a zvol device
> to an image file on a ufs filesystem, and was surprised that
> reading from the zvol character device doesn't detect "EOF".
>
> On snv_66 (sparc) and snv_73 (x86) I can reproduce it, like this:
>
> # zfs create -V 1440k tank/floppy-img
> # dd if=/dev/zvol/dsk/tank/floppy-img
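The quoted dd command is cut off; a minimal sketch of the described test, reusing the pool name from the post (the block size is an assumption, and rdsk is the raw node the subject refers to):

# zfs create -V 1440k tank/floppy-img
# dd if=/dev/zvol/rdsk/tank/floppy-img of=/dev/null bs=1k

A working EOF should stop dd after 1440 records; the report is that reads keep succeeding past the end of the volume.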
2013 May 15
1
still mbuf leak in 9.0 / 9.1?
Hi list, since we activated 10GbE on ixgbe cards plus jumbo frames (9k) on 9.0, and now on 9.1, we have noticed that after a random period of time, sometimes a week, sometimes only a day, the system doesn't send any packets out. The symptom is that you can't log in via ssh, and nfs and istgt are not operative. Yet you can log in on the console and execute commands. A clean shutdown isn't possible
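A sketch for confirming an mbuf leak from the still-working console (the interval and log path are arbitrary; both commands are standard FreeBSD):

# while true; do date; netstat -m | head -5; sleep 60; done >> /var/log/mbuf.log
# vmstat -z | grep -i mbuf

A 9k jumbo cluster count that climbs and never drains back would point at a leak rather than a transient burst.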
2007 Nov 27
0
zpool detach hangs, causing other zpool commands, format, df etc. to hang
Customer has a Thumper running SunOS x4501 5.10 Generic_120012-14 i86pc i386 i86pc, where running "zpool detach disk1 c6t7d0" to detach a mirror causes the zpool command to hang with the following kernel stack trace:

PC: _resume_from_idle+0xf8   CMD: zpool detach disk1 c6t7d0
stack pointer for thread fffffe84d34b4920: fffffe8001c30c10
[ fffffe8001c30c10 _resume_from_idle+0xf8() ]
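When zpool, format and df all wedge like this, a sketch of the next step is dumping kernel thread stacks from the live box (the output file is illustrative):

# echo "::threadlist -v" | mdb -k > /var/tmp/threads.out
# pstack `pgrep -x zpool`

Comparing the hung commands' stacks usually shows the lock or sync they are all queued behind.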
2007 Jul 23
12
GRUB, zfs-root + Xen: Error 16: Inconsistent filesystem structure
Hi Lin, In addition to bug 6541114...

Bug ID    6541114
Synopsis  GRUB/ZFS fails to load files from a default compressed (lzjb) root
...

I found yet another way to get the "Error 16: Inconsistent filesystem structure" from GRUB, this time when trying to boot a Xen Dom0 from a zfs bootfs. Synopsis: grub/zfs-root: cannot boot xen from a zfs root
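For context, the shape of the menu.lst entry involved; every path and the exact syntax below are illustrative of a Solaris xVM setup of that era, not taken from the bug report:

title Solaris xVM
root (hd0,0,a)
kernel$ /boot/$ISADIR/xen.gz
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive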
2006 Sep 22
1
Linux Dom0 <-> Solaris prepared Volume
Hi all, I have been trying (in vain) to get a Solaris b44 DomU (downloaded from Sun) running on a Linux Xen host. I followed the howto exactly, and it looked OK when it started booting... but it never boots. I adapted the config file to boot with -v (so that I can at least see something), and this is what I get:
===SNIP===
root@Xen-VT02:/export/xc/xvm/solaris-b44# xm create solaris-b44-64.py -c
Using config
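A sketch of what a Xen 3.x python config like solaris-b44-64.py typically contained; every path below is an assumption for illustration, only the directory comes from the transcript:

name = "solaris-b44"
memory = 1024
# paravirtual Solaris boots the i86xpv kernel plus a miniroot ramdisk
kernel = "/export/xc/xvm/solaris-b44/unix"
ramdisk = "/export/xc/xvm/solaris-b44/x86.miniroot"
extra = "-v"    # the verbose-boot flag the poster added
disk = ['file:/export/xc/xvm/solaris-b44/disk.img,0,w']
vif = ['']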
2007 Apr 03
2
ZFS panics with dmu_buf_hold_array
Hi, I have been wrestling with ZFS issues since yesterday, when one of my disks sort of died. After much fighting with "zpool replace" I managed to get the new disk in and got the pool to resilver, but since then I have one error left that I can't clear:

  pool: data
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption.
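The usual sequence for an error that survives a resilver, with the pool name taken from the post:

# zpool status -v data     (map the corruption to file names or metadata)
# zpool scrub data         (re-verify every block after the replace)
# zpool clear data         (clearing only sticks once the damage is gone)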
2008 Apr 01
4
panic ... recursive mutex_enter installing a snv_85 domU on Linux dom0
While attempting to install snv_85 as a domU, with Debian etch as the dom0, the kernel panics before the rest of the system starts to boot. Currently I'm using the xen 3.2 packages from Debian sid. I stopped using 3.0.3, as it appears I'm unable to use a ramdisk setting.

# dmesg | grep -i mem
...
Memory: 1903728k/1949252k available (1623k kernel code, 36208k reserved,
2011 Nov 25
1
Recovering from kernel panic / reboot cycle importing pool.
Yesterday morning I awoke to alerts from my SAN that one of my OS disks was faulty; FMA said it was a hardware failure. By the time I got to work (1.5 hours after the email) ALL of my pools were in a degraded state, and "tank", my primary pool, had kicked in two hot spares because it was so discombobulated.

------------------- EMAIL -------------------
List of faulty resources:
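A sketch of the recovery path for a panic-on-import loop of this vintage (pool name from the post; the -F rewind import exists in builds from late 2009 on):

# zpool import -nF tank    (dry run: report what rewinding would discard)
# zpool import -F tank     (rewind to the newest importable txg)

An older, blunter workaround is setting zfs:zfs_recover=1 (plus aok=1) in /etc/system before attempting the import.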
2006 Jul 17
11
ZFS benchmarks w/8 disk raid - Quirky results, any thoughts?
Hi All, I've just built an 8 disk zfs storage box, and I'm in the testing phase before I put it into production. I've run into some unusual results, and I was hoping the community could offer some suggestions. I've basically made the switch to Solaris on the promises of ZFS alone (yes, I'm that excited about it!), so naturally I'm looking
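Before fancier benchmarks, a crude sequential baseline is worth having; the paths are illustrative, and the file must be far larger than RAM so the ARC can't hide the disks:

# dd if=/dev/zero of=/tank/bench/big bs=1024k count=16384
# dd if=/tank/bench/big of=/dev/null bs=1024k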
2007 Feb 26
15
Efficiency when reading the same file blocks
If you have N processes reading the same file sequentially (where the file size is much greater than physical memory) from the same starting position, should I expect all N processes to finish in the same time as a single process would? In other words, if you have one process that reads blocks from a file, is it "free" (meaning no additional total I/O cost) to have another process
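A crude experiment for the question, with the file name and reader count as placeholders (again sized well past RAM, so the extra readers can't be served purely from cache):

# time dd if=/data/big of=/dev/null bs=128k
# time sh -c 'for i in 1 2 3 4; do dd if=/data/big of=/dev/null bs=128k & done; wait'

If the second run takes about as long as the first, the extra readers were effectively free; if it takes roughly 4x, each reader paid the full I/O cost.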
2013 Oct 26
2
[PATCH] 1. changes for vdiskadm on illumos based platform
2. update ZFS in libfsimage from illumos for pygrub

diff -r 7c12aaa128e3 -r c2e11847cac0 tools/libfsimage/Rules.mk
--- a/tools/libfsimage/Rules.mk	Thu Oct 24 22:46:20 2013 +0100
+++ b/tools/libfsimage/Rules.mk	Sat Oct 26 20:03:06 2013 +0400
@@ -2,11 +2,19 @@ include $(XEN_ROOT)/tools/Rules.mk
 CFLAGS += -Wno-unknown-pragmas -I$(XEN_ROOT)/tools/libfsimage/common/
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU at 100% in SYS, like:

bash-3.00# mpstat 1
[...]
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
  0    0   0   67   220  110   20    0    0    0    0
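A sketch for seeing where that 100% SYS goes while the export spins (the duration is arbitrary):

# lockstat -kIW -D 20 sleep 30

This profiles the kernel for 30 seconds and prints the top 20 call sites, which should show whether the export is spinning inside ZFS teardown or somewhere else.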
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
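The knobs involved, for comparison; the dataset name is illustrative, and lzjb was the default algorithm at the time:

# zfs set compression=gzip tank/media     (what was enabled here)
# zfs set compression=lzjb tank/media     (far lighter on the CPU)
# zfs get compressratio tank/media        (check what gzip actually bought)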