Displaying 17 results from an estimated 17 matches for "fsstat".
2009 Jan 24
0
fsstat Provider
I've been playing with the fsstat provider lately. I'm curious why it's
gone undocumented and largely ignored (per Google). I admit that I did
experience some oddities with the data, so I'd also like to rule out
whether it has problems that might have caused it to be quietly left in
undocumented.
benr.
2008 Jul 05
4
iostat and monitoring
Hi gurus,
I like zpool iostat and I like system monitoring, so I set up a script
within sma to let me get the zpool iostat figures through SNMP.
The problem is that, as zpool iostat is only run once for each SNMP
query, it always reports the same static set of figures, like so:
root@exodus:snmp # zpool iostat -v
capacity operations bandwidth
pool used avail read
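The static figures are zpool iostat's first report, which covers everything since boot. One workaround (a sketch of mine, not from the thread) is to take two reports a few seconds apart in the background and let the SNMP exec hook read a cached copy of the second one. A minimal Python sketch, where the pool name, interval, and cache path are assumptions:

#!/usr/bin/env python3
# Cache a per-interval `zpool iostat` sample for an SNMP exec hook.
# zpool iostat's first report is the since-boot average, so we ask
# for two reports and keep only the second (the real interval data).
import subprocess

POOL = "tank"                          # hypothetical pool name
INTERVAL = 5                           # seconds covered by the sample
CACHE = "/var/tmp/zpool_iostat.cache"  # file the SNMP exec script cats

out = subprocess.check_output(
    ["zpool", "iostat", POOL, str(INTERVAL), "2"], text=True
)
# Keep only data lines for this pool; the last one is the interval sample.
samples = [l for l in out.splitlines() if l.split()[:1] == [POOL]]
with open(CACHE, "w") as f:
    f.write(samples[-1] + "\n")

Run it from cron or a small loop, and point the snmpd exec/extend entry at something like cat /var/tmp/zpool_iostat.cache.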
2010 Mar 06
3
Monitoring my disk activity
Recently I've been benchmarking all kinds of stuff on my systems, and one
question I can't intelligently answer is what blocksize I should use in
these tests.
I assume there is something that monitors current disk activity, which I
could run on my production servers to give me some statistics on the block
sizes that the users are actually issuing against the production server.
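Since fsstat reports both operation counts and byte counts per interval, dividing bytes by ops gives the average I/O size users are actually issuing; that is one rough answer to the blocksize question (my suggestion, not from the thread). A Python sketch, assuming the column layout shown in the fsstat samples quoted elsewhere on this page; the mount point is hypothetical:

#!/usr/bin/env python3
# Estimate average read/write sizes from `fsstat <mount> 1 <count>` output.
# Columns: new file, name remov, name chng, attr get, attr set, lookup ops,
# rddir ops, read ops, read bytes, write ops, write bytes, mount point.
import subprocess

SUFFIX = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def to_num(tok):
    # fsstat abbreviates large values, e.g. '38.0M'
    if tok[-1] in SUFFIX:
        return float(tok[:-1]) * SUFFIX[tok[-1]]
    return float(tok)

MOUNT = "/export/home"   # hypothetical mount point
out = subprocess.check_output(["fsstat", MOUNT, "1", "5"], text=True)
for line in out.splitlines():
    cols = line.split()
    if len(cols) != 12 or cols[-1] != MOUNT:
        continue         # skip the two header lines
    rd_ops, rd_bytes = to_num(cols[7]), to_num(cols[8])
    wr_ops, wr_bytes = to_num(cols[9]), to_num(cols[10])
    if rd_ops:
        print("avg read  size: %.1f KB" % (rd_bytes / rd_ops / 1024))
    if wr_ops:
        print("avg write size: %.1f KB" % (wr_bytes / wr_ops / 1024))

Applied to the fsstat sample in the 2012 thread below (664 write ops moving 38.0M), this works out to roughly 58 KB per write.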
2007 Jan 10
1
Solaris 10 11/06
...mory on failure
6433408 namespace_reload() can leak memory on allocation failure
6433679 zpool_refresh_stats() has poor error semantics
6433680 changelist_gather() ignores libuutil errors
6433717 offline devices should not be marked persistently unavailable
6435779 6433679 broke zpool import
6436502 fsstat needs to support file systems greater than 2TB
6436514 zfs share on /var/mail needs to be run explicitly after system boots
6436524 importing a bogus pool config can panic system
6436526 delete_queue thread reporting drained when it may not be true
6436800 ztest failure: spa_vdev_attach() returns E...
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
...enunix`vmem_xfree+0x104
genunix`vmem_free+0x29
genunix`kmem_slab_destroy+0x87
genunix`kmem_slab_free+0x2bb
genunix`kmem_magazine_destroy+0x39a
genunix`kmem_depot_ws_reap+0x66
genunix`taskq_thread+0x285
unix`thread_start+0x8
3221701
This happens in the sched (pid 0) process. My fsstat output looks like this:
# fsstat /content 1
new name name attr attr lookup rddir read read write write
file remov chng get set ops ops ops bytes ops bytes
0 0 0 664 0 952 0 0 0 664 38.0M /content
0 0 0 658 0 935 0...
2006 May 24
1
New features in Solaris Express 05/06
Will the ability to import a destroyed ZFS pool, and the fsstat command that's part of the latest Solaris Express release (B38), make it into Solaris 10 Update 2 when it's released in June/July? Also, has any decision been made yet as to which build Update 2 will be taken from, to give an idea of what can be expected for ZFS?
2006 Apr 06
0
NFSv3 and File operation scripts
Howdy,
I wrote a pair of scripts to measure file and NFSv3 operations, and
thought I would share them with the folks on the list. You can view
the script output by pointing your browser at the following URLs:
Per Process NFSv3 Client statistics (inspired by fsstat/nfsstat):
http://daemons.net/~matty/code/nfsclientstats.pl.txt
Per Process File Operations (inspired by fsstat):
http://daemons.net/~matty/code/dfsstat.pl.txt
If you think the scripts are useful, you can snag them from my website:
http://daemons.net/~matty
I plan to add NFSv4 support to n...
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
about 1 TB of mailboxes on ZFS filesystems. Recently, when under
load, we've had incidents where IMAP operations became very slow. The
general symptoms are that the number of imapd, pop3d, and lmtpd
processes increases, the CPU load average increases, but the ZFS I/O
bandwidth decreases. At the same time, ZFS
2012 Jul 04
1
dovecot and nfs readdir vs readdirplus operations
...problem we have is that the new servers have performance problems. Even
when we have only a small part of our total users (about 25%) directed to
the new farm, performance is very poor, even useless.
Looking for NFS problems, we have found a lot of differences in NFS
operations. For example, this is the nfsstat output from one of the new
servers at this moment:
myotis21:~# nfsstat
Client rpc stats:
calls retrans authrefrsh
414528349 885 37
Client nfs v3:
null getattr setattr lookup access readlink
0 0% 95673837 23% 3961938 0% 89586364 21% 110097351 26% 2930961...
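On the Linux clients, the counters nfsstat formats are read from /proc/net/rpc/nfs, which is easier to diff between the old and new farms than nfsstat's wrapped layout. A small Python sketch of mine (not from the thread) that pulls the v3 client counts in standard NFSv3 procedure order:

#!/usr/bin/env python3
# Read the NFSv3 client op counters behind `nfsstat` from /proc/net/rpc/nfs.
PROCS3 = ("null getattr setattr lookup access readlink read write "
          "create mkdir symlink mknod remove rmdir rename link "
          "readdir readdirplus fsstat fsinfo pathconf commit").split()

def client_v3_counts(path="/proc/net/rpc/nfs"):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == "proc3":
                # fields[1] is the number of counters that follow
                return dict(zip(PROCS3, map(int, fields[2:])))
    return {}

counts = client_v3_counts()
total = sum(counts.values()) or 1
for op in ("readdir", "readdirplus", "getattr", "lookup", "access"):
    n = counts.get(op, 0)
    print("%-12s %12d %5.1f%%" % (op, n, 100.0 * n / total))

Running the same script on an old and a new server makes the readdir vs readdirplus difference in the subject line easy to quantify.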
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
...(prstat, fstat, iostat, mpstat) are consuming some CPU.
- zpool iostat -v tank 5 is frozen (it freezes when I issue a zpool clear rpool c4t0d7s0 in another session)
- iostat -xn is not stuck, but has shown all zeroes since the very moment zpool iostat froze (which is quite strange if you look at the fsstat output hereafter). NB: when I say all zeroes, I really mean it; it's not zero dot something, it's zero dot zero.
- mpstat shows normal activity (almost nothing, since this is a test machine, so only a few percent are used, but it still shows some activity and refreshes correctly)
CPU minf...
2004 Jul 06
0
destroyed files using shares on nfs-mounted filesystem
..., whether the problem is in Linux or in Solaris or in Samba or elsewhere?
Using the NFS shares directly from Linux with cp seems to work, sometimes fast, sometimes slow. So it seems to be the combination of Samba and NFS.
Perhaps statistics help to understand what happens.
Output from /usr/sbin/nfsstat on linux-side:
...
Client rpc stats:
calls retrans authrefrsh
2776963 6929 0
...
Client nfs v3:
null getattr setattr lookup access readlink
0 0% 2174692 78% 1922 0% 211235 7% 625 0% 87 0%
read write create mkdir symlink...
2006 Jul 21
1
LDA Command time limit exceeded
...981329 11% 818838045 24% 0 0%
read write create mkdir symlink mknod
649838729 19% 502171416 14% 29188492 0% 301741 0% 0 0% 0 0%
remove rmdir rename link readdir readdirplus
54138192 1% 19903 0% 30702957 0% 27796623 0% 1294453 0% 84457031 2%
fsstat fsinfo pathconf commit
48529 0% 31463 0% 0 0% 11339977 0%
Output of iostat:
Linux 2.4.27-3-686-smp (data.clm.net4all.ch) 21. 07. 06
cpu-moy: %user %nice %sys %iowait %idle
0,16 0,00 1,65 0,00 98,19
Device: tps Blk_lus/s Blk_écr...
2002 Sep 05
2
AIX & Large File Support Problem (+ Solution)
Hi,
I just wanted to relate the solution to a problem I was having, in the hope
of saving someone else a day of frustration. I'm using rsync-2.5.5 on AIX 4.3,
compiled with gcc 2.95.3. The file I was sync'ing was very large (>2GB).
Despite being configured with --enable-largefiles (which #defines
_LARGE_FILES properly for AIX), and despite the fact that the initial
transfer of said file
2018 Jan 08
0
Re: virtdf outputs on host differs from df in guest
...d on a
read-only mount because it's only required for certain modifications
at ENOSPC that can't be reserved ahead of time (e.g. btree blocks
for an extent split during unwritten extent conversion at ENOSPC).
The numbers above will be slightly more than 5%, because the total
blocks reported in fsstat don't include things like the space used
by the journal, whereas the reserve pool sizing just works from raw
sizes in the on-disk superblock.
So total fs size is at least 24713 blocks. 5% of that is 1235.6
blocks. The difference in free blocks is 24653 - 23347 = 1306
blocks. It's right in...
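The arithmetic in that reply, restated in a couple of lines (the variable names are mine; the figures are the ones quoted above):

# XFS reserve-pool arithmetic from the reply above.
total_blocks = 24713                    # total fs size in blocks (at least)
reserve = 0.05 * total_blocks           # ~5% reserve pool
print(reserve)                          # 1235.65 blocks

free_with, free_without = 24653, 23347  # free blocks before/after the reserve
print(free_with - free_without)         # 1306, in line with the ~5% estimate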
2018 Jan 07
3
Re: virtdf outputs on host differs from df in guest
After installing libguestfs_xfs, all the results are:
[using guestfish]
guestfish -N fs:xfs -m /dev/sda1 statvfs /
bsize: 4096
frsize: 4096
blocks: 24713
bfree: 23391
bavail: 23391
files: 51136
ffree: 51133
favail: 51133
fsid: 2049
flag: 4096
namemax: 255
[using virt-rescue]
virt-rescue -a test1.img
><rescue> mount /dev/sda1 /sysroot
><rescue> stat -f /sysroot
File:
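For reference, df-style figures follow from the statvfs fields printed above in the obvious way; a quick restatement of mine using those values:

# df-style numbers derived from the guestfish statvfs output above.
frsize = 4096                            # fundamental block size
blocks, bfree, bavail = 24713, 23391, 23391

total_bytes = blocks * frsize            # 101224448 bytes, ~96.5 MiB
used_bytes  = (blocks - bfree) * frsize  # 5414912 bytes, ~5.2 MiB
avail_bytes = bavail * frsize            # 95809536 bytes, ~91.4 MiB
print(total_bytes, used_bytes, avail_bytes)

Comparing these against df inside the guest shows where the two views diverge.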
2008 Dec 05
0
resync onnv_105 partial for 6713916
...akefile
usr/src/cmd/mdb/sparc/v9/smbfs/Makefile
usr/src/cmd/mms/mm/common/mm_mmp_mount.c
usr/src/cmd/pools/poolstat/poolstat.c
usr/src/cmd/raidctl/raidctl.c
usr/src/cmd/rcm_daemon/Makefile.com
usr/src/cmd/rcm_daemon/common/vlan_rcm.c
usr/src/cmd/rcm_daemon/common/vnic_rcm.c
usr/src/cmd/stat/fsstat/fsstat.c
usr/src/cmd/stmfadm/stmfadm.c
usr/src/cmd/su/su.c
usr/src/cmd/svc/milestone/net-physical
usr/src/cmd/svc/milestone/network-physical.xml
usr/src/cmd/svc/profile/generic_limited_net.xml
usr/src/cmd/svc/profile/generic_open.xml
usr/src/cmd/truss/codes.c
usr/src/cmd/vna/Makefile
usr/s...
2007 Apr 11
0
raidz2 another resilver problem
...at 6,0 (sd27):
Apr 11 21:47:10 thumper-9.srv offline or reservation conflict
Exporting/importing the pool doesn't help for those messages.
I rebooted the server. It helped to silence the above log entries.
I also stopped nfsd, so no IOs are issued to the pool except
resilvering.
bash-3.00# fsstat zfs 1
new name name attr attr lookup rddir read read write write
file remov chng get set ops ops ops bytes ops bytes
0 4.50K 1 487K 0 75.3K 65 40.8K 981M 53 233K zfs
0 0 0 0 0 0 0 0 0 0 0 zfs
0 0 0...