Displaying 20 results from an estimated 1000 matches similar to: "default nfsmount options"
2019 Sep 15
2
nfsmount default timeo=7 causes timeouts on 100 Mbps
I can't explain why 700 ms isn't enough to avoid timeouts on 100
Mbps networks, but my tests confirm it, so I'm writing to the list to
request that the default timeo be increased to at least 30, or to 600,
which is the default for `mount -t nfs`.
How to reproduce:
1) Cabling:
server <=> 100 Mbps switch <=> client
Alternatively, one can use a 1000 Mbps switch and
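For reference, timeo is measured in tenths of a second, so timeo=7 is a
0.7 s retransmit timeout. A minimal sketch of overriding it at mount time
(server name and paths are placeholders, not from the report):

    # timeo=600 = 60 s, the nfs(5) default mentioned above
    mount -t nfs -o timeo=600 server:/export /mnt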
2019 Sep 20
3
nfsmount default timeo=7 causes timeouts on 100 Mbps
In case anyone's interested, I followed up on the linux-nfs mailing list:
https://marc.info/?l=linux-nfs&m=156887818618861&w=2
Thanks,
Alkis
On 9/15/19 10:51 AM, Alkis Georgopoulos wrote:
> I think I got it.
>
> Both nfsmount and `mount -t nfs` now default to rsize/wsize = 1 MB.
> By lowering this to 32K, all issues are gone, even with the default
> timeo=7. And
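A sketch of the workaround quoted above (server and paths are placeholders):

    # cap rsize/wsize at 32 KiB: ~3 ms on the wire at 100 Mbps,
    # versus ~85 ms for the 1 MiB default
    mount -t nfs -o rsize=32768,wsize=32768 server:/export /mnt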
2011 Jan 21
1
Unhandled exception error on single account
I'm running into a strange problem with an unhandled exception when I run a wine command using one account, but not another(!).
I'd very much appreciate any help troubleshooting this, as I must run wine from the account with the problem. I've gone through and cross-checked environment variables and permissions, but nothing obvious pops out.
Here's wine the way it should work on
2000 Nov 08
0
Re: [livid-dev] Re: some comments on the ovd proposal
On Wed, 8 Nov 2000 15:33:48 -0800
Michel LESPINASSE <walken@zoy.org> wrote:
> One reason is i18n of subtitles. You probably dont want to drag full
> unicode fonts in each decoder...
My point was that we don't want to drag full unicode fonts into the *spec*.
On a desktop-class system, you do want to put that complexity in the
decoder. Streams will benefit universally from
2011 Sep 12
6
WINE fails in directories with Question Marks in name
Is there a way to tell WINE that question marks in directory names are okay? WINE fails whenever I ask it to access a file from within a directory with a question mark in its name.
Specifically, my HTPC marks all commercials using WINE and Comskip. Whenever the schedule includes a TV show with a question mark in its title, WINE fails. Here are the logs that illustrate the failure.
Script log
2019 Sep 15
0
nfsmount default timeo=7 causes timeouts on 100 Mbps
I think I got it.
Both nfsmount and `mount -t nfs` now default to rsize/wsize = 1 MB.
By lowering this to 32K, all issues are gone, even with the default
timeo=7. And nfsroot=xxx client responsiveness is a whole lot better.
I think when nfsmount was initially written, the default rsize/wsize
were much lower, which matched the timeo=7.
Now they cause the lags/timeouts that I reported.
So
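For an NFS root, the same sizes ride on the kernel command line; a sketch
using the standard nfsroot= syntax (addresses and paths are placeholders):

    root=/dev/nfs ip=dhcp nfsroot=192.168.0.1:/srv/nfsroot,rsize=32768,wsize=32768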
2019 Oct 07
0
[klibc:master] nfsmount: Use kernel client's default value for timeo option
Commit-ID: 886783e7a10fb7a638bc6034e4cdcb6296cea6a1
Gitweb: http://git.kernel.org/?p=libs/klibc/klibc.git;a=commit;h=886783e7a10fb7a638bc6034e4cdcb6296cea6a1
Author: Ben Hutchings <ben at decadent.org.uk>
AuthorDate: Mon, 7 Oct 2019 17:18:41 +0100
Committer: Ben Hutchings <ben at decadent.org.uk>
CommitDate: Mon, 7 Oct 2019 17:27:49 +0100
[klibc] nfsmount: Use kernel
2004 Aug 19
5
[PATCH] use reliable nfs mount options per default
Peter,
we found that NFS over UDP will corrupt data under very extreme load;
there is no way to fix this due to the way UDP works.
TCP will not have these problems.
I also wonder why the packet size is only 1k. Anyone who wants options
suited to a slow connection can pass them via the kernel cmdline.
Everyone else probably prefers the fast mount.
The defaults should look more like this:
---
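The patch body is elided above; as a sketch only (these options are my
illustration, not the actual patch), the "reliable" defaults argued for
would look like:

    # TCP transport and hard retries instead of UDP with small packets
    mount -t nfs -o tcp,hard,rsize=32768,wsize=32768 server:/export /mnt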
2019 Sep 21
0
nfsmount default timeo=7 causes timeouts on 100 Mbps
I managed to get to the bottom of this, and filed a bug report for NFS:
https://bugzilla.kernel.org/show_bug.cgi?id=204939
Klibc nfsmount still has a bug: it needs to NOT hardcode timeo=7.
Either the NFS defaults should be used,
which result in timeo=600,rsize=1048576,wsize=1048576,
or at least the kernel-documented defaults:
https://www.kernel.org/doc/Documentation/filesystems/nfs/nfsroot.txt
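Spelled out as mount options, the two alternatives look like this (the
values on the second line are an assumption based on that document; verify
against the link above):

    mount -t nfs -o timeo=600,rsize=1048576,wsize=1048576 server:/export /mnt  # nfs(5) defaults
    mount -t nfs -o timeo=7,rsize=4096,wsize=4096 server:/export /mnt          # nfsroot.txt defaults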
2012 Jun 19
1
"Too many levels of symbolic links" with glusterfs automounting
I set up a gluster 3.3 volume for another sysadmin and he has added it
to his cluster via automount. It seemed to work initially, but after some
time (days) he is now regularly seeing this warning:
"Too many levels of symbolic links"
df: `/share/gl': Too many levels of symbolic links
when he tries to traverse the mounted filesystems.
I've been using gluster with static mounts
2019 Jan 18
0
[klibc:master] nfsmount: support nfsvers= and vers= options
Commit-ID: c4b811a1e4647224ddc717fac59900d16d0e9d4d
Gitweb: http://git.kernel.org/?p=libs/klibc/klibc.git;a=commit;h=c4b811a1e4647224ddc717fac59900d16d0e9d4d
Author: Baptiste Jonglez <baptiste.jonglez at imag.fr>
AuthorDate: Thu, 14 Sep 2017 09:22:21 -0700
Committer: Ben Hutchings <ben at decadent.org.uk>
CommitDate: Wed, 2 Jan 2019 03:08:04 +0000
[klibc] nfsmount: support
2017 Sep 14
0
[PATCH] nfsmount: support nfsvers= and vers= options
The standard mount option nowadays to specify NFS version is "nfsvers", as
documented in nfs(5) on modern Linux systems. Up to now, nfsmount only
supported the old "v2" or "v3" boolean options.
Extend option parsing to support both "nfsvers=X" and "vers=X", with X
being equal to either 2 or 3 (nfsmount does not support NFSv4 at present).
If both
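Hypothetical usage under the patched parser, assuming nfsmount's -o option
syntax (server and paths are placeholders):

    nfsmount -o nfsvers=3 server:/export /mnt   # new spelling
    nfsmount -o vers=2 server:/export /mnt      # equivalent alternative
    nfsmount -o v3 server:/export /mnt          # old boolean form, still accepted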
2018 Mar 19
0
get_user_pages returning 0 (was Re: kernel BUG at drivers/vhost/vhost.c:LINE!)
On Mon, Mar 19, 2018 at 4:29 PM, David Sterba <dsterba at suse.cz> wrote:
> On Mon, Mar 19, 2018 at 05:09:28PM +0200, Michael S. Tsirkin wrote:
>> Hello!
>> The following code is triggered by syzbot:
>>
>>     r = get_user_pages_fast(log, 1, 1, &page);
>>     if (r < 0)
>>         return r;
>>     BUG_ON(r != 1);
>>
2011 Aug 22
2
btrfs over nfs
I have been experimenting with exporting btrfs subvolumes over NFS. The main
subvolume is filesys1, mounted at /filesys1. Below this is subvolume
base; user1 is in base, and documents is in user1. documents is
mounted at /documents. /etc/exports is:
/filesys1/base/user1 172.16.0.0/24(rw,no_acl,no_root_squash,fsid=0)
/filesys1/user1-snapshot 172.16.0.0/24(rw,no_acl,no_root_squash,fsid=0)
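A sketch of the matching client-side mounts ("server" and the mountpoints
are placeholders):

    mount -t nfs server:/filesys1/base/user1 /mnt/user1
    mount -t nfs server:/filesys1/user1-snapshot /mnt/user1-snapshot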
2008 Feb 27
2
NFSroot is acting strange in CentOS5
Hello all,
I have observed a problem with a diskless PXE client I am attempting
to configure. The PXE/NFS/DHCP/TFTP server is running CentOS 5.1, and the
diskless workstation's root and kernel were extracted from a CentOS 5.1
system (custom kernel, built with the setting that enables root file
system support).
Problem: when the diskless client boots and logs in, I notice that my
root user is being squashed, even if I
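For reference, squashing is controlled per export in /etc/exports; a
minimal sketch (path and network are placeholders):

    # the default root_squash maps client uid 0 to the anonymous user;
    # no_root_squash lets client root act as root on the export
    /srv/nfsroot 192.168.0.0/24(rw,sync,no_root_squash)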
2014 Feb 19
1
Problems with Windows on KVM machine
Hello. We've started a virtualisation project and got stuck at one point.
Currently we are using the following:
Intel 2312WPQJR as a node
Intel R2312GL4GS as storage, with a two-port Intel Infiniband controller
Infiniband Mellanox SwitchX IS5023 for switching.
The nodes run CentOS 6.5 with the built-in Infiniband package (Linux v0002
2.6.32-431.el6.x86_64); the storage runs CentOS 6.4, also with the built-in
2007 Apr 26
1
Re: Voicemail on Different Server, Voicemail with NFS
> -----Original Message-----
> From: JR Richardson [mailto:jmr.richardson@gmail.com]
> Sent: Saturday, June 17, 2006 2:30 PM
> To: asterisk-users@lists.digium.com; Douglas Garstang
> Subject: Voicemail with NFS (working, I think)
>
> I'm using a stand-alone VM server and exporting the VM files ro for
> MWI function only. All my registration servers mount the remote
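A sketch of such a read-only mount on a registration server (hostname is a
placeholder; the spool path is Asterisk's usual default):

    # ro is enough for MWI: the servers only need to see the message files
    mount -t nfs -o ro vmserver:/var/spool/asterisk/voicemail /var/spool/asterisk/voicemail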
2009 Jan 19
1
iscsi of a SAN on a DomU
Hi,
I have a Debian Etch x86_64 with Xen 3.1 on a 2.6.18-xen kernel.
I have some DomUs running Debian Etch.
I installed open-iscsi and configured /etc/iscsi/iscsid.conf:
---
node.active_cnx = 1
node.startup = automatic
#node.session.auth.username = dima
#node.session.auth.password = aloha
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 10
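With a config like the one truncated above in place, the usual next steps
are target discovery and login with iscsiadm (portal IP is a placeholder):

    iscsiadm -m discovery -t sendtargets -p 192.168.0.10
    iscsiadm -m node --login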
2018 Nov 15
2
huge increase in storage activity afther dovecot upgrade
Yes, multiple IMAP servers using one shared NFS storage. With the same
config on 2.2.13 the public interface traffic was similar to the storage
interface traffic, around 100 Mbps.
After we switched to 2.2.27 the storage interface traffic jumped 10 times
while the public interface stayed the same. This makes us think that
something is wrong and that each time a user logs in the whole Inbox content
is read
2018 Mar 19
0
get_user_pages returning 0 (was Re: kernel BUG at drivers/vhost/vhost.c:LINE!)
Hello!
The following code is triggered by syzbot:
    r = get_user_pages_fast(log, 1, 1, &page);
    if (r < 0)
        return r;
    BUG_ON(r != 1);
Just looking at get_user_pages_fast's documentation, this seems
impossible: it is supposed to only ever return the number of pages
pinned or an errno.
However, poking at the code, I see at least one path that might cause this: