similar to: Voicemail with NFS (working, I think)

Displaying 20 results from an estimated 3000 matches similar to: "Voicemail with NFS (working, I think)"

2007 Apr 26
1
Re: Voicemail on Different Server, Voicemail with NFS
> -----Original Message-----
> From: JR Richardson [mailto:jmr.richardson@gmail.com]
> Sent: Saturday, June 17, 2006 2:30 PM
> To: asterisk-users@lists.digium.com; Douglas Garstang
> Subject: Voicemail with NFS (working, I think)
>
> I'm using a stand-alone VM server and exporting the VM files ro for
> MWI function only. All my registration servers mount the remote
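As a rough sketch of the setup described above (hostname and paths are hypothetical, not taken from the thread), the voicemail spool would be exported read-only and mounted on each registration server roughly like this:

    # /etc/exports on the stand-alone voicemail server (hypothetical path)
    /var/spool/asterisk/voicemail  192.168.1.0/24(ro,sync,no_subtree_check)

    # on each registration server: a read-only mount is enough for MWI checks
    mount -t nfs -o ro vmserver:/var/spool/asterisk/voicemail /var/spool/asterisk/voicemail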
2008 Feb 27
2
NFSroot is acting strange in CentOS5
Hello all, I have observed a problem with a diskless PXE client I am attempting to configure. The PXE/NFS/DHCP/TFTP server is running CentOS 5.1, and the diskless workstation's root and kernel were extracted from a CentOS 5.1 system (with a custom kernel built to enable root-filesystem-over-NFS support). Problem: when the diskless client boots and I log in, I notice that my root user is being squashed, even if I
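The behaviour described here is nfsd's default root_squash mapping root to nobody. A minimal sketch of an export line that turns it off for a diskless root (path and client address are hypothetical):

    # /etc/exports on the PXE/NFS server
    # no_root_squash keeps the diskless client's root as uid 0
    /exports/qatest1-root  192.168.0.10(rw,sync,no_root_squash,no_subtree_check)

    # apply the change without restarting nfsd
    exportfs -ra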
2008 Feb 26
1
PXE client is root squashing ...
Hello all, though the question is NFS-related, it is in conjunction with PXE booting. Issue/Scenario: - PXE/DHCP/NFS and TFTP server running CentOS 5.1, configured per the PXE/Syslinux and RHEL diskless workstation documentation. 1. Attempt to test the client's ability to PXE boot over the network and run Root-NFS. 2. The NFS export on the PXE server is: # # NFS Export Files for qatest1 host #
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a 3.3 gluster volume for another sysadmin and he has added it to his cluster via automount. It seems to work initially, but after some time (days) he is now regularly seeing this warning when he tries to traverse the mounted filesystems:

$ df
df: `/share/gl': Too many levels of symbolic links

I've been using gluster with static mounts
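For comparison, a minimal autofs sketch of the kind of glusterfs automount described above (volume name, server, and paths are hypothetical, not from the thread):

    # /etc/auto.master: hand the /share directory to an indirect map
    /share  /etc/auto.share

    # /etc/auto.share: mount the volume via the glusterfs FUSE helper
    gl  -fstype=glusterfs  server1:/glvolume

    # the equivalent static mount, which has been reliable in my use
    mount -t glusterfs server1:/glvolume /share/gl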
2004 Aug 19
5
[PATCH] use reliable nfs mount options per default
Peter, we found that NFS over UDP will corrupt data under very extreme load; there is no way to fix it, due to the way UDP works. TCP will not have these problems. I also wonder why the packet size is only 1k. Everyone who wishes for a slow connection can pass the desired options via the kernel cmdline; everyone else probably prefers the fast mount. The defaults should look more like this: ---
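A sketch of what such "reliable" options look like when passed explicitly to an nfsroot client on the kernel command line (server address, path, and sizes are placeholders, not the patch's actual values):

    # force TCP and a larger block size instead of the UDP/1k defaults
    nfsroot=192.168.0.1:/exports/root,tcp,rsize=4096,wsize=4096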
2006 Jun 26
0
[klibc 14/43] Remove in-kernel nfsroot code
The in-kernel nfsroot code is obsoleted by kinit. Remove it; it causes conflicts. Signed-off-by: H. Peter Anvin <hpa at zytor.com>
---
commit 161e1dc16ec1129b30b634a2a8dcbbd1937800c5
tree c30da837d746fe65d8a13ccf6f27bd381948edb4
parent 018604e070e143657abcf0cb256a1e2dda205d97
author H. Peter Anvin <hpa at zytor.com> Sat, 20 May 2006 16:24:05 -0700
committer H. Peter Anvin <hpa at
2009 May 14
3
if no NFS server clients are waiting..
What can I do if the NFS server is rebooting or offline? I mean the clients just wait and wait and wait... I tried setting the timeo=5,retrans=2 mount options when mounting NFS in fstab on the client side, but still no luck; the clients just keep waiting... Can I set a timeout somewhere? :D Thank you for any tips
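One knob the thread does not mention but which addresses exactly this: a soft mount makes NFS operations return an error once timeo/retrans are exhausted, instead of blocking forever. A hedged fstab sketch (server and paths are placeholders; note that soft mounts can lose writes):

    # timeo is in tenths of a second, so timeo=50 = 5 s per attempt;
    # after retrans=2 retries the call fails instead of hanging
    nfsserver:/export  /mnt/nfs  nfs  soft,timeo=50,retrans=2  0 0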
2008 Jul 25
1
default nfsmount options
Hi, In a discussion on the kernel bugzilla (bug 11154) the question of NFS mount options came up. Trond Myklebust seems to consider the default nfsmount options (V3 over TCP, with timeo=7 and retrans=3) to be bad. I don't know yet if that's my problem, but anyway, would you consider changing the defaults to timeo=600 and retrans=2, if only just to match the default util-linux mount
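Spelled out as explicit mount options, the two sets of defaults under discussion look roughly like this (server and path are placeholders):

    # klibc nfsmount defaults being questioned: 0.7 s timeout, 3 retries
    mount -t nfs -o vers=3,tcp,timeo=7,retrans=3 server:/export /mnt

    # proposed util-linux-style defaults: 60 s timeout, 2 retries
    mount -t nfs -o vers=3,tcp,timeo=600,retrans=2 server:/export /mnt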
2014 Feb 19
1
Problems with Windows on KVM machine
Hello. We've started a virtualisation project and got stuck at one point. Currently we are using the following: an Intel 2312WPQJR as a node, an Intel R2312GL4GS as storage with an Intel InfiniBand two-port controller, and an InfiniBand Mellanox SwitchX IS5023 for switching. The nodes run CentOS 6.5 with the built-in InfiniBand package (Linux v0002 2.6.32-431.el6.x86_64); the storage runs CentOS 6.4, also with the built-in
2019 Sep 15
0
nfsmount default timeo=7 causes timeouts on 100 Mbps
I think I got it. Both nfsmount and `mount -t nfs` now default to rsize/wsize = 1 MB. By lowering this to 32K, all issues are gone, even with the default timeo=7. And nfsroot=xxx client responsiveness is a whole lot better. I think when nfsmount was initially written, the default rsize/wsize were much lower, which matched the timeo=7. Now they cause the lags/timeouts that I reported. So
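A sketch of the workaround described above, applied on the kernel command line of an nfsroot client (server address and path are placeholders):

    # cap rsize/wsize at 32 KB so each NFS RPC completes within the
    # default timeo=7 (0.7 s) on a 100 Mbps link
    nfsroot=10.0.0.1:/srv/nfsroot,rsize=32768,wsize=32768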
2019 Sep 15
2
nfsmount default timeo=7 causes timeouts on 100 Mbps
I can't explain why 700 msecs aren't enough to avoid timeouts in 100 Mbps networks, but my tests verify it, so I'm writing to the list to request that you increase the default timeo to at least 30, or to 600, which is the default for `mount -t nfs`. How to reproduce: 1) Cabling: server <=> 100 Mbps switch <=> client. Alternatively, one can use a 1000 Mbps switch and
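The fix requested here, sketched as an explicit client mount (placeholders again): keep the other defaults but give each RPC more time before the client retransmits.

    # timeo=600 is 60 s, matching the `mount -t nfs` default
    mount -t nfs -o timeo=600,retrans=2 10.0.0.1:/srv/nfsroot /mnt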
2011 Aug 22
2
btrfs over nfs
I have been experimenting with exporting btrfs subvolumes over NFS. The main subvolume is filesys1, mounted at /filesys1. Below this is subvolume base; user1 is in base, and documents is in user1. documents is mounted at /documents. /etc/exports is:

/filesys1/base/user1 172.16.0.0/24(rw,no_acl,no_root_squash,fsid=0)
/filesys1/user1-snapshot 172.16.0.0/24(rw,no_acl,no_root_squash,fsid=0)
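One detail worth flagging in the exports above: fsid=0 marks the NFSv4 pseudo-root, so assigning it to two different exports is ambiguous. A hedged sketch of one conventional layout (fsid values are illustrative; paths as in the post):

    # /etc/exports: a single pseudo-root, distinct fsids for the rest
    /filesys1                 172.16.0.0/24(rw,no_root_squash,fsid=0)
    /filesys1/base/user1      172.16.0.0/24(rw,no_acl,no_root_squash,fsid=1)
    /filesys1/user1-snapshot  172.16.0.0/24(rw,no_acl,no_root_squash,fsid=2)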
2019 Oct 07
0
[klibc:master] nfsmount: Use kernel client's default value for timeo option
Commit-ID: 886783e7a10fb7a638bc6034e4cdcb6296cea6a1
Gitweb: http://git.kernel.org/?p=libs/klibc/klibc.git;a=commit;h=886783e7a10fb7a638bc6034e4cdcb6296cea6a1
Author: Ben Hutchings <ben at decadent.org.uk>
AuthorDate: Mon, 7 Oct 2019 17:18:41 +0100
Committer: Ben Hutchings <ben at decadent.org.uk>
CommitDate: Mon, 7 Oct 2019 17:27:49 +0100

[klibc] nfsmount: Use kernel
2019 Sep 20
3
nfsmount default timeo=7 causes timeouts on 100 Mbps
In case anyone's interested, I followed up in the linux-nfs mailing list: https://marc.info/?l=linux-nfs&m=156887818618861&w=2 Thanks, Alkis

On 9/15/19 10:51 AM, Alkis Georgopoulos wrote:
> I think I got it.
>
> Both nfsmount and `mount -t nfs` now default to rsize/wsize = 1 MB.
> By lowering this to 32K, all issues are gone, even with the default
> timeo=7. And
2019 Jan 06
0
Authentication/Penalty disabled (socket mode=0) introduces constant 5 sec delays (2.27 on debian 9)
On 20/12/2018 at 18:09, Ludovic Pouzenc wrote:
> Hi,
>
> I hit a bizarre problem with dovecot 2.2.7 on debian 9 with LMTP enabled and auth/penalty disabled as documented here:
> https://wiki.dovecot.org/Authentication/Penalty
>
> Use case: I run a swaks command to send an email to an exim4 that tries to make a callout to dovecot-lmtp.
> At RCPT TO: swaks hangs
2020 Sep 24
0
Dovecot permission denied errors on NFS after upgrade to 2.2.17
Hi, this sounds correct; here is my fstab entry:

XXX.XXX.XXX.XXX:/mail-storage  /mnt/mail-storage  nfs  defaults,timeo=30  0  0

Here are my options when doing "mount":
2020 Oct 23
1
dovecot-uidlist invalid data
Hello, I have a problem with "Invalid data" errors. System: Debian 10, dovecot-2.2.36.4

# 2.2.36.4 (baf9232c1): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.4.24.rc1 (debaa297)
# OS: Linux 4.19.0-12-amd64 x86_64 Debian 10

Oct 23 15:57:52 dovecot6 dovecot: lmtp(33973,media4_js,2KEXD2Dhkl+1hAAAe3x6RQ): Error: Broken file /vmail/me/media4_js/Maildir/dovecot-uidlist line 6875: Invalid data:

In debian9 -
2008 Apr 07
1
NFS, acls, proto, and "kernel: svc: unknown version"
Hi all,
1) My NFSv3 clients don't display or obey existing non-POSIX ACLs on files of NFSv3-mounted exports.
2) setfacl on the client throws an error and fails:
# setfacl -m u:stowler:rw testfile.text
setfacl: testfile.text: Operation not supported
3) At the time of the client mount, the server's /var/log/messages shows "kernel: svc: unknown version (3)". Any thoughts greatly
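The "svc: unknown version (3)" message suggests the client is requesting version 3 of the NFSACL side protocol (RPC program 100227) and the server isn't registering it. A hedged first check from the client (server name is a placeholder):

    # list the RPC services the server registers; on a server with
    # ACL support this should include nfs_acl version 3 (program 100227)
    rpcinfo -p nfsserver | grep -i acl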
2018 Nov 15
2
huge increase in storage activity afther dovecot upgrade
Yes, multiple IMAP servers using one shared NFS storage. With the same config on 2.2.13, the public-interface traffic was similar to the storage-interface traffic, around 100 Mbps. After we switched to 2.2.27, the storage-interface traffic jumped 10 times while the public interface stayed the same. This makes us think that something is wrong and that each time a user logs in the whole Inbox content is read
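Not a diagnosis of this particular regression, but for reference these are Dovecot's documented settings for multiple servers sharing mail over NFS (2.2-era option names):

    # dovecot.conf: recommended when indexes/mail live on shared NFS
    mmap_disable = yes       # don't mmap index files over NFS
    mail_fsync = always      # fsync so other servers see changes
    mail_nfs_storage = yes   # flush NFS caches for mail files
    mail_nfs_index = yes     # ...and for index files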
2018 Dec 20
2
Authentication/Penalty disabled (socket mode=0) introduces constant 5 sec delays (2.27 on debian 9)
Hi, I hit a bizarre problem with dovecot 2.2.7 on debian 9 with LMTP enabled and auth/penalty disabled as documented here: https://wiki.dovecot.org/Authentication/Penalty

Use case: I run a swaks command to send an email to an exim4 that tries to make a callout to dovecot-lmtp. At RCPT TO: swaks hangs 5.0<something-small> seconds, then proceeds normally (exim is waiting for the callout
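For reference, the wiki page cited above disables penalty tracking by zeroing the mode of anvil's auth-penalty socket; a minimal sketch of that config (the listener name is as I recall it from the 2.2 defaults, so treat it as an assumption):

    # dovecot.conf: disable authentication penalty tracking
    service anvil {
      unix_listener anvil-auth-penalty {
        mode = 0    # mode 0 disables the penalty socket
      }
    }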