Displaying 12 results from an estimated 12 matches for "fsflush".
2009 Mar 30
1
fsflush writes very slow
...s 10 x86 system with UFS disk. We're often only seeing disk write throughput of around 6-8MB/s, even when there is minimal read activity. Running iosnoop shows that most of the physical writes are made by the actual app and average around 32KB. About 15% of the data, however, is done by fsflush and only 4 or 8KB at a time. The write throughput for the fsflush writes is about 10% that of the app writes (using the "DTIME" values and aggregating the results to get totals). CPU resources are not a bottleneck.
If I turn off dopageflush the overall rate jumps to 18-20MB/s. However...
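For reference, the dopageflush tunable and the fsflush wake-up interval (autoup) can be inspected and changed as below; a minimal sketch for a Solaris 10 system, with illustrative values only:

```sh
# Inspect the current settings on a live kernel:
echo "dopageflush/D" | mdb -k
echo "autoup/D" | mdb -k

# Turn off fsflush page flushing on the running system (as the poster did):
echo "dopageflush/W 0" | mdb -kw

# For a persistent change, the equivalent /etc/system lines would be:
#   set dopageflush = 0
#   set autoup = 240      (seconds between full flush cycles; illustrative)
```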
2009 Jan 21
6
nfsv3 provider: "failed to grab process"
...he following simple
script:
#! /usr/sbin/dtrace -s
#pragma D option quiet
nfsv3:::op-read-start {
printf("%s\n", args[1]->noi_curpath);
}
however, when running it, I get the following error:
dtrace: failed to compile script ./nfs2.d: line 5: failed to grab process 3
pid 3 is fsflush:
UID PID PPID C STIME TTY TIME CMD
root 3 0 1 Nov 13 ? 1663:31 fsflush
what am i doing wrong?
thanks,
river.
2006 Oct 31
0
6413573 deadlock between fsflush() and zfs_create()
Author: maybee
Repository: /hg/zfs-crypto/gate
Revision: 2dd24a5efe8bb703f8a0b96d0cbaab6d4f744456
Log message:
6413573 deadlock between fsflush() and zfs_create()
6416101 du inside snapshot produces bad sizes and paths
Files:
update: usr/src/uts/common/fs/zfs/sys/zfs_znode.h
update: usr/src/uts/common/fs/zfs/zfs_dir.c
update: usr/src/uts/common/fs/zfs/zfs_vnops.c
update: usr/src/uts/common/fs/zfs/zfs_znode.c
2007 Jun 11
1
2 iosnoop scripts: different results
...t; 1024 dad1 W 0.156
bash 1998 /dtrace/mod2 1024 dad1 R 8.807
bash 5184 /usr/bin/ls 8192 dad1 R 10.332
ls 5184 /dtrace/mod2/examples 1024 dad1 R 0.259
fsflush 3 /var/tmp/dtrace-1b 8192 dad1 W 0.278
# io.d
...
sched 0 <none> 1024 dad1 W 0.176
bash 1998 /dtrace/mod2 1024 dad1 R 8.835
bash 5184...
2006 Feb 15
4
Script for Stackdepth by Thread/LWP?
I'm interested in monitoring the amount of stack used by a multi-threaded program. I assume the 'stackdepth' built-in would be useful, but I'm not sure. I've been through the demos, the ToolKit, and the internals, but it's just not clicking for me yet.
Not sure how to measure start/end of stack size dynamically... Anyone know how to break this down?
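As a starting point, the following D sketch (hypothetical, not from the thread) samples the target process and tracks the deepest user stack per LWP using the ustackdepth built-in; note that it counts stack frames, not bytes of stack:

```d
#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Sample the target process ~97 times per second and keep,
 * per thread (tid), the deepest user stack observed.
 * ustackdepth is a frame count, not a byte size. */
profile-97
/pid == $target/
{
        @maxdepth[tid] = max(ustackdepth);
}

END
{
        printa("tid %d: deepest user stack seen: %@d frames\n", @maxdepth);
}
```

Run it against a pid with `./stackdepth.d -p <pid>`.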
2008 Dec 17
12
disk utilization is over 200%
...start ' matched 7 probes
CPU ID FUNCTION:NAME
8 49675 :tick-5sec
16189 nTrade /export/data/dbxpt3/logs/ledgers/arinapt3.NTRPT3-MOCA.trans_outmsg.ledger 32768
25456 pt_chmod /export/data/dbxpt3/logs/NTRPT3-MOCA.log 32768
3 fsflush <none> 38912
25418 pt_chmod /export/data/dbxpt3/logs/NTRPT3-MOCA.log 49152
21372 tail /export/data/dbxpt3/logs/NTRPT3-MOCA.log 65536
16189 nTrade /export/data/dbxpt3/logs/ledgers/arinapt3.NTRP...
2010 Jul 06
2
Jul 06 00:06:15 dict: Error: dict client: Broken handshake
...t things are happening:
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 0 0 0 Jun 15 ? 0:16 sched
root 1 0 0 Jun 15 ? 0:24 /etc/init -
root 2 0 0 Jun 15 ? 0:00 pageout
root 3 0 0 Jun 15 ? 10:37 fsflush
root 331 1 0 Jun 15 ? 0:00 /usr/lib/saf/sac -t 300
root 334 331 0 Jun 15 ? 0:00 /usr/lib/saf/ttymon
root 153 1 0 Jun 15 ? 0:00 /usr/sbin/rpcbind
root 392 196 0 Jun 16 ? 0:00 in.telnetd
root 75 1 0 Jun 15 ?...
2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
...genunix`ioctl+0x184
unix`syscall_trap+0xac
55
unix`xc_one+0x260
dtrace`dtrace_ioctl+0xe6c
genunix`fop_ioctl+0x20
genunix`ioctl+0x184
unix`syscall_trap+0xac
57
genunix`fsflush_do_pages+0x1f4
genunix`fsflush+0x3e0
unix`thread_start+0x4
62
unix`utl0+0x4c
68
genunix`anon_map_getpages+0x28c
genunix`segvn_fault_anonpages+0x310
genunix`segvn_fault+0x438...
2007 Feb 12
17
NFS/ZFS performance problems - txg_wait_open() deadlocks?
Hi.
System is snv_56 sun4u sparc SUNW,Sun-Fire-V440, zil_disable=1
We see many operations from NFS clients to that server being really slow (e.g. 90 seconds for unlink()).
It's not a problem with the network, and there's also plenty of CPU available.
Storage isn't saturated either.
First strange thing: normally on that server nfsd has about 1500-2500 threads.
I did
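The txg_wait_open() stalls named in the subject could be timed directly; a hedged sketch (assuming fbt probes for the zfs module are available on this build, as they were on Nevada-era kernels):

```d
#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Distribution of time threads spend blocked in txg_wait_open().
 * Multi-second waits here would match the slow unlink() calls
 * reported by the NFS clients. */
fbt:zfs:txg_wait_open:entry
{
        self->ts = timestamp;
}

fbt:zfs:txg_wait_open:return
/self->ts/
{
        @wait["txg_wait_open wait (ns)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}

tick-10s
{
        printa(@wait);
        trunc(@wait);
}
```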
2009 Apr 01
4
ZFS Locking Up periodically
I've recently re-installed an X4500 running Nevada b109 and have been
experiencing ZFS lock ups regularly (perhaps once every 2-3 days).
The machine is a backup server and receives hourly ZFS snapshots from
another thumper - as such, the amount of zfs activity tends to be
reasonably high. After about 48 - 72 hours, the file system seems to lock
up and I'm unable to do anything
2008 Jan 18
33
LatencyTop
I see Intel has released a new tool. Oh, it requires some patches to
the kernel to record latency times. Good thing people don't mind
patching their kernels, eh?
So who can write the equivalent latencytop.d the fastest? ;-)
http://www.latencytop.org/
--
cburgess at qnx.com
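A rough cut at the latencytop.d invited above, summing off-CPU time per process and sleeping kernel stack with the sched provider (a sketch, not a full equivalent of the Intel tool):

```d
#!/usr/sbin/dtrace -s
#pragma D option quiet

/* Sum time each thread spends off-CPU, keyed by process name and a
 * short kernel stack, and print the top ten waiters every 5 seconds. */
sched:::off-cpu
{
        self->ts = timestamp;
}

sched:::on-cpu
/self->ts/
{
        @lat[execname, stack(4)] = sum(timestamp - self->ts);
        self->ts = 0;
}

tick-5s
{
        trunc(@lat, 10);
        printa(@lat);
        trunc(@lat);
}
```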
2010 Jun 28
23
zpool import hangs indefinitely (retry post in parts; too long?)
Now at 36 hours since zdb process start and:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
827 root 4936M 4931M sleep 59 0 0:50:47 0.2% zdb/209
Idling at 0.2% processor for nearly the past 24 hours... feels very stuck. Thoughts on how to determine where and why?
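To see where the zdb process (pid 827 in the prstat output above) is actually blocked, the standard Solaris tools can dump its stacks; a diagnostic sketch:

```sh
# User-level stacks of all 209 zdb LWPs:
pstack 827

# Kernel-side view of where each zdb thread is sleeping, via mdb:
echo "0t827::pid2proc | ::walk thread | ::findstack -v" | mdb -k
```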