search for: nfsstats

Displaying 20 results from an estimated 31 matches for "nfsstats".

2005 Sep 02
0
nfsstat -m equivalent
Is there an equivalent command to nfsstat -m on CentOS 3.4? On Solaris it reports statistics for each NFS-mounted file system, e.g.: /mount/point from host:/some/path Flags: vers=3,proto=tcp,sec=sys,hard,intr,link,symlink,acl,rsize=32768,wsize=32768,retrans=5,timeo=50 Attr cache: acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 On a related topic, how can I make a CentOS 3.4 NFS server
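
For reference, Linux clients expose much the same information; a minimal check, assuming nfs-utils provides the -m flag (recent versions do) and that /proc/mounts is readable:

    nfsstat -m                 # lists each NFS mount with its option flags
    grep ' nfs' /proc/mounts   # raw per-mount options: vers, proto, rsize, wsize, timeo, retrans
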
2013 Feb 14
1
NFS resources, how to check version
Hello, I set up an NFSv4 server. To make sure, I set vfs.nfsd.server_min_nfsvers=4. I can check its version, for example, by tcpdumping, and then I can see in Wireshark lines like: Network File System, Program Version: 4, V4 Procedure: COMPOUND ... Is there any easier way to check its version? I see there is an nfsstat -e option which shows delegs and locks. But all other ones are combined with nfsv3
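
A lighter-weight check than a packet capture, sketched under the assumption that the client is also FreeBSD, the mount already exists, and the nfsstat build includes the -m flag:

    nfsstat -m       # on the client: the mount flags include the negotiated version (e.g. nfsv4)
    sysctl vfs.nfsd.server_min_nfsvers vfs.nfsd.server_max_nfsvers   # on the server, if the max oid is present
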
2012 Jul 04
1
dovecot and nfs readdir vs readdirplus operations
Hello, We are having performance problems trying to migrate our POP/IMAP servers to a new version. Our old servers are 4 Debian Lenny machines with 5 GB of RAM running on XenServer VMs with kernel 2.6.32-4-amd64 and dovecot 1.1.16. The new servers are 4 Ubuntu 12.04 machines with dovecot 2.1.5 running on VMware VMs with 6 cores and 16 GB of RAM and kernel 3.2.0-24-generic. On both setups we are using NFS 3 with
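
A minimal sketch for narrowing this down on a Linux NFSv3 client (the server name and mount path below are placeholders): nfsstat shows how many of each operation the client has issued, and READDIRPLUS can be turned off per mount if it turns out to be the expensive one:

    nfsstat -c -3 -l | grep -i readdir                 # compare readdir vs. readdirplus call counts
    mount -o vers=3,nordirplus server:/export /mnt     # nordirplus disables READDIRPLUS for this mount
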
2009 Sep 10
3
Excessive NFS operations
Reading the "waiting IOs" thread made me remember I have a similar problem that has been here for months, and I have no solution yet. A single CentOS 5.2 x86_64 machine here is overloading our NetApp filer with excessive NFS getattr, lookup and access operations. The weird thing is that the number of these operations increases over time. I have an mrtg graph (which I didn't
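
A minimal way to watch which counters are actually climbing, plus one common mitigation (the 60-second value and the mount names are only examples; longer attribute caching trades freshness for fewer GETATTRs):

    watch -d -n 5 'nfsstat -c -3'              # -d highlights counters that change between samples
    mount -o actimeo=60 server:/export /mnt    # placeholder mount; lengthens the attribute cache
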
2014 Oct 08
0
centos 7, docker, NFS and uid = -2
I created a CentOS 7 Docker container in which I want to mount an NFS share. Said share is owned by user virtual with uid 1200. So I do some exporting (the Docker container is in 172.17.0.0/16):
spindizzy> cat /etc/exports
/export       10.0.0.0/24(ro,fsid=0,no_subtree_check,sync) 172.17.0.0/16(ro,fsid=0,no_subtree_check,sync)
[...]
/export/mail  172.17.0.0/16(rw,root_squash,no_subtree_check,sync)
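
A -2/nobody owner usually means NFSv4 ID mapping could not translate the name; a minimal sketch of the relevant config, assuming both ends run rpc.idmapd and example.com stands in for the real domain:

    # /etc/idmapd.conf on both the client (container) and the server
    [General]
    Domain = example.com        # must match on both ends, otherwise owners fall back to nobody
    [Mapping]
    Nobody-User = nobody
    Nobody-Group = nobody

Mounting with -o vers=3 sidesteps ID mapping entirely, since NFSv3 puts numeric uids on the wire.
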
2005 Aug 18
4
Closing information leaks in jails?
Hello, I'm wondering about closing some information leaks in FreeBSD jails from the "outside world". Not that critical (it depends on the application), but a regular user, even with restricted devfs in the jail (for example devfsrules_jail from /etc/defaults/devfs.rules), can figure out the following: - network-interface-related data, via ifconfig, which contains everything, but the
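
Not jail-specific, but a few system-wide sysctls reduce what unprivileged users (jailed or not) can see; a minimal sketch, with the usual caveat that some software expects to read this information:

    sysctl security.bsd.see_other_uids=0             # hide other users' processes
    sysctl security.bsd.see_other_gids=0
    sysctl security.bsd.unprivileged_read_msgbuf=0   # block kernel message buffer reads for non-root
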
2010 Jun 17
9
Monitoring filessytem access
When somebody is hammering on the system, I want to be able to detect who's doing it, and hopefully even what they're doing. I can't seem to find any way to do that. Any suggestions? Everything I can find ... iostat, nfsstat, etc ... AFAIK, just shows me performance statistics and so forth. I'm looking for something more granular. Either *who* the
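
If this is a Solaris/OpenSolaris box, a DTrace one-liner gets closer to "who is doing it" than the summary tools; a minimal sketch that aggregates read/write syscalls by uid and executable:

    dtrace -n 'syscall::read:entry,syscall::write:entry { @[uid, execname] = count(); }'
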
2008 Jul 06
2
Measuring ZFS performance - IOPS and throughput
Can anybody tell me how to measure the raw performance of a new system I'm putting together? I'd like to know what it's capable of in terms of IOPS and raw throughput to the disks. I've seen Richard's raidoptimiser program, but I've only seen results for random read iops performance, and I'm particularly interested in write
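
A crude baseline, assuming a pool named tank (placeholder): watch per-vdev IOPS and bandwidth while driving a sequential write; note that dd through the filesystem measures cached writes unless the working set exceeds RAM or the file is synced:

    zpool iostat -v tank 5                               # per-vdev ops/sec and bandwidth, 5-second samples
    dd if=/dev/zero of=/tank/testfile bs=1M count=8192   # simple sequential-write load
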
2007 Feb 04
3
Refused oplock on NFS mounted file system
Hi all, I have been a loyal user of Samba since about 1996(?), and I recently upgraded a Red Hat Linux based system by skipping a few generations and going directly from RH9 to FC5. Almost everything is working, but there is an issue I haven't worked out yet with Samba. The main server here in our group has several shares, including some that are mounted from other servers via NFS,
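
The usual workaround when a Samba share sits on an NFS mount is to disable oplocks for that share, since kernel-level lease support generally does not extend to NFS-mounted paths; a minimal smb.conf sketch with a placeholder share name and path:

    [global]
        kernel oplocks = no      # global option in Samba 3.x
    [nfsshare]
        path = /mnt/nfs/share
        oplocks = no
        level2 oplocks = no
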
2003 Jun 11
1
nfs panic with umount -f
...list
1577            register caddr_t cp;
1578            register int32_t t1, t2;
1579            caddr_t bpos, dpos, cp2;
1580            int error = 0, wccflag = NFSV3_WCCRATTR;
1581            struct mbuf *mreq, *mrep, *md, *mb, *mb2;
1582            int v3 = NFS_ISV3(dvp);
1583
1584            nfsstats.rpccnt[NFSPROC_REMOVE]++;
1585            nfsm_reqhead(dvp, NFSPROC_REMOVE,
1586                    NFSX_FH(v3) + NFSX_UNSIGNED + nfsm_rndup(namelen));
(kgdb) p *proc
Cannot access memory at address 0x0.
(kgdb) p *dvp
$3 = {v_flag = 0, v_usecount = 9, v_writecount = 0, v_holdcnt = 0, v_id = 427...
2020 May 15
2
CentOS7 and NFS
The number of threads has nothing to do with the number of cores on the machine. It depends on the I/O, the network speed, the type of workload, etc. We usually start with 32 threads and increase if necessary. You can check the statistics with: watch 'cat /proc/net/rpc/nfsd | grep th' Or you can check on the client: nfsstat -rc Client rpc stats: calls retrans authrefrsh 1326777974 0
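
On CentOS 7 the persistent knob is RPCNFSDCOUNT in /etc/sysconfig/nfs; a minimal sketch for bumping and verifying the thread count (32 is just the starting value suggested above):

    rpc.nfsd 32                      # takes effect immediately, not persistent across reboots
    grep th /proc/net/rpc/nfsd       # the "th" line shows the current thread count and usage
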
2023 Apr 01
1
clients not connecting to samba shares
On 01/04/2023 16:15, Gary Dale via samba wrote:
>>
>> The problem is, you shouldn't really have Linux groups per se, you
>> should have Windows groups that are also Linux groups i.e. everything
>> is in AD.
>
> That's not a great idea. It would mean I'd have to modify every Linux
> system.

Possibly

> And can Linux groups even have a domain let
2008 Apr 15
4
NFS Performance
Hi, With help from Oleg we got the right patches applied and NFS working well. Maximum performance was about 60 MB/sec. Last week that dropped to about 12.5 MB/sec and I cannot find a reason. Lustre clients all obtain 100+ MB/sec on GigE. Each OST is good for 270 MB/sec. When mounting the client on one of the OSSs I get 230 MB/sec. Seems the speed is there. How can NFS and Lustre be tuned
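
A generic first check when NFS-over-GigE throughput collapses (not specific to the Lustre re-export): confirm the negotiated transfer sizes and measure an uncached write, with /mnt/nfs as a placeholder mount point:

    grep nfs /proc/mounts                                                # check rsize/wsize/proto on the client
    dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024 oflag=direct   # bypasses the client page cache
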
2017 Nov 01
0
How to limit Apple Mail (desktop)?
"@lbutlr" <kremels at kreme.com> writes: (Are you the OP, or have I mistakenly atributed this to Rupert Gallagher?) >> So what the composition of all this traffic? Are you saying the mail >> client is ultra dumb and repeatedly downloading entire messages, read >> and unread, attachment and all (i.e. you're truly bandwidth limited?) > > It most
2006 Apr 06
0
NFSv3 and File operation scripts
Howdy, I wrote a pair of scripts to measure file and NFSv3 operations, and thought I would share them with the folks on the list. You can view the script output by pointing your browser at the following URLs: Per Process NFSv3 Client statistics (inspired by fsstat/nfsstat): http://daemons.net/~matty/code/nfsclientstats.pl.txt Per Process File Operations (inspired by fsstat):
2008 May 20
1
I need some NFS explanations, please.
I have a problem with NFS that I can't begin to resolve. All servers are CentOS 5 servers. One server exports a directory and two others mount it. Simple so far. A file is created on the server, and the two NFS clients do an "ls -al" and get a common (meaning the same) result as the server. Over the course of a day, and after the file on the server has been modified on the
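
If the clients are seeing stale attributes, the client-side attribute cache is the first thing to rule out (by default cached attributes can live for up to 60 seconds, per acregmax/acdirmax); a minimal sketch of the mount-option trade-off, with placeholder server/export names and more GETATTR traffic as the cost:

    mount -o actimeo=3 server:/export /mnt   # cap attribute-cache lifetime at 3 seconds
    mount -o noac server:/export /mnt        # disable attribute caching entirely
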
2010 Oct 18
1
Real time NFS monitoring
Hi all, Is there any tool we can use to see, for NFS: 1. what files are being accessed, and 2. the performance (bandwidth, etc.)? Thank you.
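
Two commonly suggested starting points, assuming Linux clients (nfsiostat ships with nfs-utils; nfswatch is a separate package):

    nfsiostat 5      # per-mount ops/sec, throughput and RTT on the client, 5-second samples
    nfswatch         # interactive per-client/per-procedure breakdown on the server
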
2004 Jul 06
0
destroyed files using shares on nfs-mounted filesystem
Hello, we are using Samba 3.0.2a on Linux 2.4.18. Previous versions of Samba showed the same effect. The Linux box has NFS shares mounted from a Solaris SunOS 5.8 host. This NFS connection is a WAN link with limited bandwidth (a few megabits per second). When the connection is slow, it often happens that saving a file to this share gets disrupted. Sometimes it simply hangs, sometimes an error
2015 Apr 29
1
nfs (or tcp or scheduler) changes between centos 5 and 6?
--On Wednesday, April 29, 2015 08:35:29 AM -0500 Matt Garman <matthew.garman at gmail.com> wrote:
> All indications are that CentOS 6 seems to be much more "aggressive"
> in how it does NFS reads. And likewise, CentOS 5 was very "polite",
> to the point that it basically got starved out by the introduction of
> the 6.5 boxes.

Some things come to mind as
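
One knob worth comparing between the 5.x and 6.x clients is per-mount readahead; a hedged sketch, assuming the mount is at /mnt/nfs and the kernel exposes the mount's backing-device info under /sys/class/bdi:

    # mountpoint -d prints the mount's anonymous device id (e.g. 0:42)
    cat /sys/class/bdi/$(mountpoint -d /mnt/nfs)/read_ahead_kb
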
2009 Aug 26
1
Load spikes on NFS server, multiple index updaters.
We are occasionally experiencing trouble where the NFS server's load will shoot to over 60 (normally below 1.0). I have been hunting this for a while, and I believe it comes down to "deliver". System setup: NFS servers: x4540, Solaris 10 x64, ZFS over NFS. NFS clients: Solaris 10 x64, postfix-2.4.1 with dovecot-1.1.11 deliver. What appears to happen, when I check for nfsstat per
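
On a Solaris 10 server, the DTrace nfsv3 provider (assuming it is available on this build) can break a spike down by client and operation, which is more direct than diffing nfsstat output:

    # count NFSv3 operations by remote client address and operation name
    dtrace -n 'nfsv3:::op-*-start { @[args[0]->ci_remote, probename] = count(); }'
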