similar to: IDMAP cache creating tons of mutex spins

Displaying 20 results from an estimated 120 matches similar to: "IDMAP cache creating tons of mutex spins"

2009 Jul 09
3
performance troubleshooting
We have a serious performance problem on our server. Here is some data:
> ::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                    1133252              4426   31%
Anon                      1956988              7644   53%
Exec and libs               31104               121    1%
Page cache
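Not from the thread itself, but a rough sketch of how such a summary is usually gathered and followed up on a Solaris/illumos box (assumes root access and the stock mdb and prstat tools):

# Kernel memory summary (the '::memstat' output quoted above comes from mdb -k)
echo ::memstat | mdb -k
# Attribute the large Anon share to processes, sorted by resident set size
prstat -s rss -n 10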
2008 Jan 17
1
Under DTrace USDT and PID, the kernel's microstate accounting doesn't work in this situation, does it?
Does anyone have any ideas about this problem?
2008/1/15, TaoJie <eulertao at gmail.com>:
> Hi all:
> I'm working on analyzing system performance now.
> My testing program is an infinite loop. Inside the loop, it does some
> mathematical operations and calls function callee(), then goes to the next
> loop.
> I installed an alarm(30) in the program. It
2006 Nov 29
7
how to debug context switching and mutex contentions?
I'm looking for a suggestion on a good way to hunt down the source of high context switching and mutex contention... Is dtrace the way to go now, or should I stick with something like lockstat? Russ
This is a 5-second interval from mpstat:
CPU minf mjf xcal  intr ithr  csw icsw migr smtx  srw syscl  usr sys  wt idl
 16    0   0 1115  1241  206 9095  912 2420 7393    0 12105   68  25
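Not part of the original post, but a commonly cited starting point for exactly this question, assuming Solaris 10 or later with lockstat(1M) available:

# Top 20 lock-contention events over a 5-second window, with 8-frame caller stacks
lockstat -C -D 20 -s 8 sleep 5
# Kernel profiling view of where SYS time is going (classic lockstat idiom)
lockstat -kIW -D 20 sleep 5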
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
  0    0   0   67  220  110  20    0    0    0   0
2008 Mar 14
8
xcalls - mpstat vs dtrace
Hi, T5220, S10U4 + patches.
mdb -k
> ::memstat
While the above is working (it takes some time; ideally a '::memstat -n 4' to use 4 threads could be useful), mpstat 1 shows:
CPU minf mjf    xcal intr ithr csw icsw migr smtx srw syscl usr sys  wt idl
 48    0   0 1922112    9    0   0    8    0    0   0 15254   6  94   0   0
So about 2 million xcalls per second. Let's check with dtrace:
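The dtrace output itself is cut off in the preview; a minimal one-liner of the kind usually used to attribute cross-calls, assuming the sysinfo provider shipped with S10U4:

# Aggregate kernel stacks for every cross-call over ~10 seconds
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); } tick-10s { exit(0); }'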
2009 Mar 28
4
mac_srs_rx_poll_ring thread never stop polling hardware in kernel
Recently I found that the mac_srs_rx_poll_ring thread may never stop in the kernel. Please see the following mpstat output: CPU 2 is at 100% kernel usage, but with no syscalls and no interrupts.
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys  wt idl
  0    0   0    0  300  100   0    0    1    0   0     0   0   0   0 100
  1   14   0    0  134   68 134    1
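A hedged way (not from the thread) to see what a CPU reporting 100% sys with no syscalls or interrupts is actually doing, assuming DTrace's profile provider and that the busy CPU is id 2 as described:

# Sample ~997 times/sec, but only on CPU 2 and only when in kernel context (arg0 != 0)
dtrace -n 'profile-997 /cpu == 2 && arg0/ { @[stack()] = count(); } tick-10s { exit(0); }'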
2008 Nov 24
3
debugging a faulty jboss/application
From time to time a jboss process would end up eating all the available CPU and the load average would skyrocket. Once the operators restarted jboss, the system would be normal again (sometimes for weeks) until the next incident. Since we moved the app from a v440 running Solaris 10 8/07 to a t2000 running Solaris 10 5/08, the problem has started to happen more frequently (2-3 times a week). The
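A typical (hypothetical, not from this thread) way to map the CPU burn back to a Java thread on Solaris, assuming prstat and the JDK's jstack are installed; <pid> below is a placeholder for the jboss process id:

# Per-LWP microstate accounting for the JVM, refreshed every 5 seconds
prstat -mLp <pid> 5
# Take a thread dump and match the hot LWP id against the hex 'nid=0x...' field
jstack <pid>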
2010 Jan 12
0
dmu_zfetch_find - lock contention?
Hi, I have a mysql instance which, if I point more load at it, suddenly goes to 100% in SYS as shown below. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN MICROSYSTEMS SUN FIRE X4170 SERVER
BIOS Configuration: American Megatrends Inc. 07060215
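A sketch for confirming kernel mutex contention of this kind (not from the post; uses DTrace's lockstat provider, so the probe name below is an assumption about a stock Solaris install):

# Count adaptive-mutex block events by kernel stack while the load is applied
dtrace -n 'lockstat:::adaptive-block { @[stack(8)] = count(); } tick-10s { exit(0); }'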
2007 Mar 14
3
I/O bottleneck Root cause identification w Dtrace ?? (controller or IO bus)
Dtrace and Performance Teams, I have the following IO-performance-specific questions (I'm already savvy with lockstat and the pre-dtrace utilities for performance analysis, but I need details on pinpointing IO bottlenecks at the controller or IO bus):
Q.A> Determining IO saturation bottlenecks (beyond service times and kernel contention). I'm
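An illustrative io-provider one-liner of the kind that usually answers the "which device/controller" part of this question (not from the post; assumes the stock io provider translators):

# Distribution of I/O sizes per device; saturated devices stand out by statname (e.g. sd3)
dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'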
2009 Mar 31
9
How to disable the polling function of mac_srs
In crossbow, each mac_srs has a kernel thread called "mac_rx_srs_poll_ring" to poll the hardware, and crossbow will wake up this thread to poll packets from the hardware automatically. Does crossbow provide any method to disable the polling mechanism, for example by disabling this kernel thread? Thanks, Zhihui
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello, Sorry for the (very) long subject, but I've pinpointed the problem to this exact situation. I know about the other threads related to hangs, but in my case there was no 'zfs destroy' involved, nor any compression or deduplication. To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800GB) and
- p1 is used as part of a zfs mirror of rpool
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS, The system was rebooted, and after the reboot the server again... The system is snv_39, SPARC, T2000.
bash-3.00# ptree 7
/lib/svc/bin/svc.startd -s
  163   /sbin/sh /lib/svc/method/fs-local
    254   /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list | wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R    254    163      7      7     0 0x4a004000
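The natural next step from the '::ps' output above, sketched here on the assumption that the same mdb -k session is still open and PID 254 is still the stuck 'zfs mount -a':

> 0t254::pid2proc | ::walk thread | ::findstack -v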
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two LSI 9200 controllers and MPxIO) running OpenIndiana 151a4, and I'm occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000 xcalls a second). The machine is pretty much idle, only receiving a bunch of multicast video streams and
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the result was the machine grinding to a halt while copying some large (.wav) files to it from another filesystem in the same pool. The system became very unresponsive, taking several seconds to echo keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty of grunt for this. Comments? Ian
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS 'success story' may interest some readers here. I was interested to see how much sequential read/write performance it would be possible to obtain from ZFS running on commodity hardware with modern features such as PCI-E busses, SATA disks, well-designed SATA controllers (AHCI, SiI3132/SiI3124). So I made this experiment of building a fileserver by
2008 Sep 24
2
PV-GRUB spins at 100% cpu
PV-GRUB is really awesome; however, I noticed that it spins at 100% CPU, even while just sitting at its prompt. Just a heads-up... Thanks, -Chris
2016 Mar 08
1
Monthly spins torrents ??
http://buildlogs.centos.org/monthly/7/ A very useful resource. Updates take less time after install. Just curious if there are torrents for them? I could commit to running a single seed for the x86_64 "everything" ISO. I dd them onto USB sticks for people (I personally recommend the 16GB Mushkin Atom - even when installing from USB2 the media verification is faster than any other
2011 Oct 20
1
tons and tons of clients, oh my!
Hello gluster-verse! I'm about to see if GlusterFS can handle a large number of clients; this was not at all in the plans when we initially selected and set up our current configuration. What sort of experience do you (the collective "you" as in y'all) have with a large client-to-storage-brick-server ratio (~1330:1)? Where do you see things going awry? Most of this will be reads
2006 Apr 06
5
g-w-d.c -> my head spins
Greetings everybody! I started looking more thoroughly at gnome-window-decorator.c, and now my head spins and "hurts", and I believe that I'm not going to achieve anything serious in terms of tweakable shadows anytime soon. It's far more difficult than I expected. While I believe I now understand how the shadows are drawn, I currently don't get why there are 12 shadow-quads and
2007 Dec 12
1
Tons of SNMP "errors" in /var/log/messages
I'm running up-to-date CentOS 5 w/ Xen. I'm getting tons (tons = 13787 just yesterday, presumably because I have a monitoring system poll every 5 minutes) of log entries of the following:
netsnmp_assert index == tmp failed if-mib/data_access/interface.c:467 _access_interface_entry_save_name()
and
netsnmp_assert rc == 0 failed if-mib/ifTable/ifTable_data_access.c:209