similar to: Odd nfs mount problem

Displaying 20 results from an estimated 11000 matches similar to: "Odd nfs mount problem"

2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote: > m.roth at 5-cent.us wrote: >> I'm exporting a directory, firewall's open on both machines (one CentOS >> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but >> the >> other server, not so much. >> >> ls /mountpoint/directory eventually times out (directory being the NFS >> mount). mount -t nfs
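For threads like the two above, the usual first step is confirming that the export is visible and the RPC services are reachable from the client before blaming the automounter. A minimal check sequence, assuming a server named "server" and the export path quoted in the messages (both are placeholders):

    # On the client: are the server's RPC services and export list reachable?
    rpcinfo -p server
    showmount -e server
    # Manual mount with NFSv3 over TCP forced, to take the automounter out of the picture:
    mount -t nfs -o vers=3,tcp server:/location/being/exported /mnt
    ls /mnt

If showmount hangs but rpcinfo answers, that typically points at mountd being blocked by the firewall rather than nfsd itself.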
2015 Feb 27
0
Odd nfs mount problem
m.roth at 5-cent.us wrote: > I'm exporting a directory, firewall's open on both machines (one CentOS > 6.6, the other RHEL 6.6), it automounts on the exporting machine, but the > other server, not so much. > > ls /mountpoint/directory eventually times out (directory being the NFS > mount). mount -t nfs server:/location/being/exported /mnt works... but an > immediate ls
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
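The snippet truncates before listing the actual mount options, but attribute caching is a common suspect when one tool sees empty files while the shell sees content. A purely illustrative fstab line disabling attribute caching (server name and paths are assumptions, not from the thread):

    # /etc/fstab -- illustrative only; the thread's real options are cut off above
    server:/export/data  /data  nfs  vers=3,noac  0 0

Disabling the attribute cache (noac) trades performance for coherence, so it is a diagnostic step rather than a recommended permanent setting.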
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi, I am trying to get the most out of my 40TB xfs file system and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things). The problem is that I have to export the filesystem via NFS and I cannot seem to get this working with the current version of nfs-utils (1.2.3). The export option fsid=uuid cannot be used (the standard
2012 Mar 02
1
xfs, inode64, and NFS
We recently deployed some large XFS file systems with CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ these reports are somewhat vague (third-hand, indirectly reported via internal corporate channels from
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> We have 1 system running CentOS 7 that is the NFS server. There are 50 >> external machines that FTP files to this server fairly continuously. >> >> We have another system running CentOS 6 that mounts the partition the files >> are FTP-ed to using NFS. > <snip>
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
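The snippets truncate before the actual symptom in this thread, so the following is only a hedged sketch of the client side of the setup described (a CentOS 6 box mounting the partition that continuously receives FTP uploads); the hostname, path, and the shortened attribute-cache timeout are all assumptions, chosen so freshly uploaded files show up promptly:

    # /etc/fstab on the CentOS 6 client (placeholder names; actimeo=3 is an assumption)
    c7server:/var/ftp/incoming  /mnt/incoming  nfs  ro,vers=3,actimeo=3  0 0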
2015 Aug 04
2
xfs question
Hi, folks, CentOS 6.6 (well, just updated with CR). I have some xfs filesystems on a RAID. They've been mounted with the option of defaults. Will it break the whole thing if I now change that to inode64, or was that something I needed to do when the fs was created, or is there some conversion I can run that won't break everything? mark
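The short answer that emerges from the rest of this thread (and the remount thread below): inode64 is a mount option, not a mkfs-time format choice, so existing data is untouched and no conversion is needed; only where *new* inodes get allocated changes. On EL6-era kernels, however, a live `mount -o remount,inode64` may be accepted but silently ignored, so a full unmount/mount cycle is the safer path. A sketch with placeholder device and mountpoint:

    # add inode64 to the fstab entry, then cycle the mount
    # /etc/fstab: /dev/sdb1  /data  xfs  defaults,inode64  0 0
    umount /data
    mount /data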
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
CentOS 6.2 system with an XFS filesystem. I'm sharing this filesystem using NFS. When I create a 10 gigabyte test file from an NFS client system: dd if=/dev/zero of=10Gtest bs=1M count=10000 10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s Output from 'ls -al ; du' during this test : -rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
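The gap between the ls size mid-test (429170688 bytes) and the 10 GB being written is expected while the write is in flight: the NFS client is still flushing dirty pages and XFS delays allocation. To compare settled numbers, force the data out before measuring; a sketch reusing the thread's file name:

    # write and flush before checking sizes
    dd if=/dev/zero of=10Gtest bs=1M count=10000 conv=fsync
    ls -l 10Gtest    # apparent size in bytes
    du -h 10Gtest    # blocks actually allocated on the server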
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4 etc.? Thanks & Regards, Bobby Jacob, Senior Technical Systems Engineer | eGroup
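The usual answer in Gluster documentation of that era is XFS with a 512-byte inode size, so Gluster's extended attributes fit inside the inode. A hedged sketch (device and mountpoint are placeholders):

    # format a brick as XFS with 512-byte inodes, then mount it
    mkfs.xfs -f -i size=512 /dev/vg_bricks/brick1
    mount -t xfs /dev/vg_bricks/brick1 /export/brick1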
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but then a colleague ran into issues where he did that but was still out of inodes. So, I did more research, and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
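Per the linked XFS-list posting, a live `remount,inode64` on older kernels can be accepted but have no effect, which would explain the colleague still running out of inodes after the remount. Verifying what the kernel actually applied is cheap (mountpoint is a placeholder):

    # did inode64 really take effect?  check the live mount table
    grep ' xfs ' /proc/mounts
    # and watch inode headroom
    df -i /mountpoint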
2014 Jan 21
2
XFS : Taking the plunge
Hi All, I have been trying out XFS given it is going to be the file system of choice from upstream in el7. Starting with an Adaptec ASR71605 populated with sixteen 4TB WD enterprise hard drives. The OS is 6.4 x86_64 and the machine has 64G of RAM. This next part was not well researched as I had a colleague bothering me late on Xmas Eve that he needed 14 TB immediately to move data to from an
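For a sixteen-drive hardware RAID like the ASR71605 setup above, the main mkfs-time decision is aligning XFS to the array's stripe geometry. A hypothetical example for a 256 KiB per-disk stripe unit across 14 data-bearing disks (e.g. RAID6 of 16 drives); the actual controller settings are not given in the thread:

    # su = per-disk stripe unit, sw = number of data-bearing disks
    mkfs.xfs -d su=256k,sw=14 /dev/sdb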
2006 Jun 02
1
RPM install vs. NFS mount and code versioning
The ongoing discussion regarding Perl modules and RPMs, prompted me to post an issue I am currently facing. I support a development team that uses various third party tools, some free, some not, some of which are installed via RPMs, and some that are not. The development cycle is 18-24 months long, and then there are many years of support (continuing engineering) after that, for which
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We are using a 12-node glusterfs v10.4 distributed vsftpd backup cluster (in service for years, not new) and 2 weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We are using a 12-node glusterfs v10.4 distributed vsftpd backup cluster (in service for years, not new) and recently 2 weeks ago
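Strahil's suggestion is the standard first check here: a distributed volume reports ENOSPC when any single brick runs out of inodes, even while `df -h` looks healthy everywhere. Checking every node, with the brick path from the reply:

    # run on each of the 12 nodes; IUse% at 100% on any brick explains the error
    df -i /lvbackups/brick
    df -h /lvbackups/brick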
2012 Oct 22
1
CentOS 6 NFS mmap I/O bug?
I'm working with a company who is running into an issue occasionally with their app running CentOS 6 on an NFS mount. The problem is essentially that, from a single CentOS 6 client, the client sometimes gets the wrong file size back from a stat() call. The problem specifically seems to happen after mmap and ftruncate calls. The former environment for the application was CentOS 4 where is
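A cheap way to demonstrate the reported symptom without the application is to compare what the server and the client each report for the same file after the mmap/ftruncate sequence has run; both paths below are placeholders:

    # on the NFS server
    stat -c '%s %n' /export/app/datafile
    # on the CentOS 6 client (same file via the mount)
    stat -c '%s %n' /mnt/app/datafile

A size mismatch between the two reproduces the stale-attribute behavior described.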
2014 Sep 25
1
CentOS 7, xfs
Well, I've set up one of our new JetStors. xfs took *seconds* to put a filesystem on it. We're talking what df -h shows as 66TB. (Pardon me, my mind just SEGV'd on that statement....) Using bonnie++, I found that a) GPT partitioning was insignificantly different from creating an xfs filesystem on a raw disk. I'm more comfortable with the partition, though.
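The partition-vs-raw-disk comparison above boils down to two short command sequences; a sketch of the partitioned variant the poster settled on (device name is a placeholder):

    # one GPT partition spanning the array, then XFS on it
    parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
    mkfs.xfs /dev/sdb1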
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three-node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, and I think after heavy I/O on a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly everything is OK. I've
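For qcow2 images on Gluster, a commonly recommended starting point is applying the prepackaged "virt" option group, which switches the volume to the cache and consistency settings tuned for VM workloads; a hedged sketch with a placeholder volume name:

    # apply the virtualization-tuned option group to the volume, then inspect it
    gluster volume set vmstore group virt
    gluster volume info vmstore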
2015 Nov 04
5
stale file handle issue [SOLVED]
*sigh* The answer is that the large exported filesystem is a very large XFS... and at least through CentOS 6, upstream has *never* fixed an NFS bug that, googling, I find has been complained about since '09: it gags on inodes > 32 bits (not sure if that's signed or unsigned, but....). The answer was to either create, or find, an unneeded directory with a < 32-bit inode, rename the
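The workaround described -- locating a directory whose inode number still fits in 32 bits -- can be scripted with GNU find; the filesystem path is a placeholder:

    # list directories whose inode numbers fit in 32 bits
    find /bigfs -maxdepth 2 -type d -printf '%i %p\n' | awk '$1 < 4294967296'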