Displaying 20 results from an estimated 600 matches similar to: "performance troubleshooting"
2008 Mar 14
8
xcalls - mpstat vs dtrace
Hi,
T5220, S10U4 + patches
mdb -k
> ::memstat
While the above is running (it takes some time; ideally a '::memstat -n 4' option to use 4 threads would be useful), mpstat 1 shows:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
48 0 0 1922112 9 0 0 8 0 0 0 15254 6 94 0 0
That is about 2 million xcalls per second.
Let's check with dtrace:
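The snippet ends before the script; a typical one-liner for attributing
cross-calls to the kernel code generating them, offered as a sketch rather
than the poster's actual script, is:

bash-3.00# dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'

Run for a few seconds and interrupted with Ctrl-C, this prints the kernel
stacks that triggered xcalls, sorted by count, which usually points straight
at the subsystem responsible.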
2009 Mar 31
9
How to disable the polling function of mac_srs
In crossbow, each mac_srs has a kernel thread called "mac_rx_srs_poll_ring"
to poll the hardware, and crossbow will wake this thread up to poll packets
from the hardware automatically. Does crossbow provide any method to disable
the polling mechanism, for example by disabling this kernel thread?
Thanks
Zhihui
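One way to confirm what those poll threads are doing, sketched here on the
assumption that the thread's entry point carries the same name and that mdb
is recent enough to have ::stacks, is:

bash-3.00# mdb -k
> ::stacks -c mac_rx_srs_poll_ring

This lists every kernel thread whose stack contains mac_rx_srs_poll_ring,
along with its state and a representative stack.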
2006 Nov 29
7
how to debug context switching and mutex contentions?
I'm looking for a suggestion on a good way to hunt down the source of
high context switching and mutex contention...
Is dtrace the way to go now, or should I stick with something like lockstat?
Russ
This is a 5-second mpstat interval:
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
16 0 0 1115 1241 206 9095 912 2420 7393 0 12105 68 25
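Both tools still apply; lockstat is the quicker first pass for the smtx
column, and DTrace's sched provider can attribute the context switches.
A sketch of each (not what Russ actually ran):

# mutex contention for 5 seconds, 8-frame stacks, top 15 events
bash-3.00# lockstat -s 8 -D 15 sleep 5
# which kernel stacks are going off-CPU most often
bash-3.00# dtrace -n 'sched:::off-cpu { @[stack(8)] = count(); }'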
2007 May 02
41
gzip compression throttles system?
I just had a quick play with gzip compression on a filesystem and the
result was the machine grinding to a halt while copying some large
(.wav) files to it from another filesystem in the same pool.
The system became very unresponsive, taking several seconds to echo
keystrokes. The box is a maxed out AMD QuadFX, so it should have plenty
of grunt for this.
Comments?
Ian
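The thread excerpt shows no measurements; a plausible first step is to
profile where kernel time goes while the copy runs:

bash-3.00# lockstat -kIW -D 20 sleep 10

With gzip compression enabled one would expect the deflate routines in the
ZIO pipeline to dominate the profile, which would explain the keystroke
latency if compression work is crowding out everything else.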
2009 Mar 28
4
mac_srs_rx_poll_ring thread never stops polling hardware in kernel
Recently I found that the mac_srs_rx_poll_ring thread may never stop in the
kernel. Please see the following mpstat output: CPU 2 is at 100% kernel
usage, but shows no syscalls and no interrupts.
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 0 300 100 0 0 1 0 0 0 0 0 0 100
1 14 0 0 134 68 134 1
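To see what CPU 2 is actually executing, profiling just that CPU is a
reasonable first step (a sketch, not from the original thread):

bash-3.00# dtrace -n 'profile-997 /cpu == 2 && arg0/ { @[stack()] = count(); } tick-10s { exit(0); }'

The arg0 predicate keeps only samples taken in the kernel, so the output is
the kernel stacks on which those cycles are being burned.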
2008 Nov 24
3
debugging a faulty jboss/application
From time to time a jboss process would end up eating all the available CPU, and the load average would skyrocket.
Once the operators restarted jboss, the system would be normal again (sometimes for weeks) until the next incident.
Since we moved the app from a V440 running Solaris 10 8/07 to a T2000 running Solaris 10 5/08, the problem has started to happen more frequently (2-3 times a week). The
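A typical way to catch such an incident live on Solaris, sketched here with
a made-up PID since the excerpt doesn't show what was actually run:

# per-LWP microstate accounting, 5-second intervals
bash-3.00# prstat -mLp 1234 5
# dump the Java stacks; match the hot LWP against the nid field
bash-3.00# jstack 1234

The LWP burning CPU in prstat maps to a Java thread via the nid (native
thread id) that jstack prints, keeping in mind that nid is in hex.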
2007 Feb 13
2
zpool export consumes a whole CPU and takes more than 30 minutes to complete
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU sitting at 100% in SYS, like:
bash-3.00# mpstat 1
[...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
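To see where that CPU is spinning, the kernel stack of the zpool thread can
be pulled from mdb (a sketch, not from the original post):

bash-3.00# echo "::pgrep zpool | ::walk thread | ::findstack -v" | mdb -k

A stack stuck in the same ZFS function across repeated samples shows what
the export is actually doing for those 30 minutes.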
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
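A quick way to attribute such a storm while it is happening, assuming the
sysinfo provider is available (it is on OpenIndiana), is:

bash-3.00# dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); } tick-1s { trunc(@, 5); printa(@); trunc(@); }'

This prints the top five xcall-generating kernel stacks once per second, so
a transient storm can be caught in the act rather than averaged away.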
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
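From here the usual next step is to pull the kernel stack of PID 254 to
confirm where the mount is sitting (consistent with the mdb session shown,
but not part of the original post):

> 0t254::pid2proc | ::walk thread | ::findstack -v

If the mount is replaying a large intent log, the stack should show the
zfs mount thread blocked under zil_replay().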
2008 Jan 17
1
Under DTrace USDT and PID providers, the kernel's microstate accounting doesn't work in this situation, does it?
Does anyone have any ideas about this problem?
2008/1/15, TaoJie <eulertao at gmail.com>:
>
> Hi all:
>
> I'm working on analyzing system performance now.
> My test program is an infinite loop. Inside the loop, it does some
> mathematical operations and calls the function callee(), then goes to the
> next iteration.
> I installed an alarm(30) in the program. It
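Whatever microstate accounting reports, the pid provider can measure
callee()'s on-CPU time directly; a minimal sketch, with a made-up target PID
for the test program:

bash-3.00# dtrace -p 1234 -n '
pid$target::callee:entry { self->ts = vtimestamp; }
pid$target::callee:return /self->ts/ {
        @["on-CPU ns in callee"] = sum(vtimestamp - self->ts);
        self->ts = 0;
}'

vtimestamp counts only the time the thread spends on CPU, so the sum can be
compared directly against what microstate accounting claims for the process.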
2012 Aug 21
0
IDMAP cache creating tons of mutex spins
Good morning,
We have been noticing trouble browsing a ZFS share, especially in
the afternoon, and found our 8 cores running at 100% with over 100000 smtx
per core in mpstat. We are running Solaris 5.11 with Samba
3.5.10, 48 GB of RAM and two 4-core Xeons. The fileserver is joined
in domain mode to Windows 2003 R2 SP2 with Services for Unix installed,
and we only have around 80
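A high smtx count is exactly what lockstat is for; a first pass might look
like this (a sketch, not what the poster ran):

bash-3.00# lockstat -C -s 10 -D 15 sleep 10

-C records contention events (adaptive mutex spins and blocks), -s 10 keeps
10-frame stacks, and -D 15 limits output to the top 15 events; if the idmap
cache is at fault, the hot mutexes should point into the idmap code.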
2010 Jan 12
0
dmu_zfetch_find - lock contention?
Hi,
I have a mysql instance which, if I point more load at it, suddenly goes to 100% in SYS. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN MICROSYSTEMS SUN FIRE X4170 SERVER
BIOS Configuration: American Megatrends Inc. 07060215
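Given the subject line, a kernel profile is the obvious confirmation, and
prefetch has a well-known tunable to test with; a sketch:

# where is the kernel time going? (profiling interrupt, 10 seconds)
bash-3.00# lockstat -kIW -D 20 sleep 10
# if dmu_zfetch_find dominates, prefetching can be disabled for a test
# by adding this line to /etc/system and rebooting:
set zfs:zfs_prefetch_disable = 1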
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello,
Sorry for the (very) long subject, but I've pinpointed the problem to this exact situation.
I know about the other threads related to hangs, but in my case there was no 'zfs destroy' involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains 2 partitions (p1=32GB, p2=1800 GB) and
- p1 is used as part of a zfs mirror of rpool
2007 Mar 14
3
I/O bottleneck root cause identification with DTrace (controller or IO bus)?
DTrace and Performance Teams,
I have the following I/O-performance-specific questions (I'm already
savvy with lockstat and the pre-DTrace
utilities for performance analysis, but I need details on pinpointing
I/O bottlenecks at the controller or I/O bus):
Q.A> Determining I/O saturation bottlenecks (beyond service
times and kernel contention).
I'm
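For questions at this level the io provider is the standard starting point;
a minimal per-device latency sketch (not from the original thread):

bash-3.00# dtrace -n '
io:::start { ts[arg0] = timestamp; }
io:::done /ts[arg0]/ {
        @[args[1]->dev_statname] = quantize((timestamp - ts[arg0]) / 1000);
        ts[arg0] = 0;
}'

Per-device latency distributions (in microseconds) help separate a saturated
disk from a saturated path: if every disk behind one controller degrades
together while disks elsewhere stay flat, the bottleneck is above the disks.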
2008 Mar 13
12
7-disk raidz achieves 430 MB/s reads and 220 MB/s writes on a $1320 box
I figured the following ZFS 'success story' may interest some readers here.
I was interested to see how much sequential read/write performance it would be
possible to obtain from ZFS running on commodity hardware with modern features
such as PCI-E busses, SATA disks, well-designed SATA controllers (AHCI,
SiI3132/SiI3124). So I made this experiment of building a fileserver by
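For reference, sequential numbers like these are usually measured with
nothing fancier than dd against a file comfortably larger than RAM; the pool
name and sizes below are made up:

bash-3.00# dd if=/dev/zero of=/tank/bigfile bs=1024k count=16384
bash-3.00# dd if=/tank/bigfile of=/dev/null bs=1024k

The file must exceed ARC size (or the pool be exported and re-imported
between the two runs), otherwise the read pass measures RAM rather than the
disks.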
2006 Jul 17
1
sem: negative parameter variances
Dear Spencer and Prof. Fox,
Thank you for your replies. I'd very much appreciate it if you have any ideas concerning the problem described below.
First, I'd like to describe the model in brief.
In general I consider a model with three equations.
First one is for annual GRP growth - in general it looks like:
1) GRP growth per capita = G(investment, migration, initial GRP per
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP
guests. If one CPU is doing virtqueue kick and another CPU touches the
vblk->lock it will have to spin until virtqueue kick completes.
This patch reduces system% CPU utilization in SMP guests that are
running multithreaded I/O-bound workloads. The improvements are small
but become more pronounced as iops and SMP counts are increased.
Khoa Huynh
2012 Jul 28
1
[PATCH V4 0/3] Improve virtio-blk performance
Hi, Jens & Rusty
This version is rebased against linux-next which resolves the conflict with
Paolo Bonzini's 'virtio-blk: allow toggling host cache between writeback and
writethrough' patch.
Patches 1/3 and 2/3 apply on Linus's master as well. Since Rusty will pick up
patch 3/3, the changes to the block core (adding blk_bio_map_sg()) will have a
user.
Jens, could you please