
Displaying 20 results from an estimated 2000 matches similar to: "CentOS 7, xfs"

2012 Mar 02
1
xfs, inode64, and NFS
We recently deployed some large XFS file systems with CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ These reports are somewhat vague (third-hand, indirectly reported via internal corporate channels from
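A workaround that recurs in these threads is pinning a small numeric fsid on the export, so the NFS file handle no longer depends on the (possibly 64-bit) inode number of the exported directory. A minimal sketch, assuming a hypothetical /export/data path:

  # /etc/exports -- path and fsid value are illustrative
  /export/data  *(rw,no_subtree_check,fsid=1)
  # re-read the exports table without restarting services
  exportfs -ra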
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some XFS filesystems >> on a RAID. They've been mounted with the 'defaults' option. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi, I am trying to get the most out of my 40TB XFS file system, and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things). The problem is that I have to export the filesystem via NFS, and I cannot seem to get this working with the current version of nfs-utils (1.2.3). The export option fsid=uuid cannot be used (the standard
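For reference, inode64 is just a mount option; a minimal /etc/fstab sketch, with a placeholder UUID and mount point:

  # /etc/fstab -- device UUID and mount point are hypothetical
  UUID=0123-abcd  /srv/bigfs  xfs  inode64,noatime  0 0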
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but a colleague ran into issues where he did that and was still out of inodes. So I did more research and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
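After a remount it is worth confirming the option actually took effect, since inode64 only influences where newly created inodes go and does not relocate existing ones. A quick check, with a hypothetical mount point:

  # verify inode64 appears in the active mount options
  grep ' /srv/bigfs ' /proc/mounts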
2014 Jan 21
2
XFS : Taking the plunge
Hi All, I have been trying out XFS, given it is going to be the file system of choice from upstream in el7. Starting with an Adaptec ASR71605 populated with sixteen 4TB WD enterprise hard drives. The OS is 6.4 x86_64 with 64G of RAM. This next part was not well researched, as I had a colleague bothering me late on Xmas Eve that he needed 14 TB immediately to move data to from an
2012 Mar 10
0
XFS inode64 and Gluster 3.2.5 NFS export
Hi, I've recently had data loss on an XFS (inode64) glusterfs (3.2.5) NFS-exported file system. I was using the gluster NFS server. On the XFS FAQ page, they have this: Q: Why doesn't NFS-exporting subdirectories of an inode64-mounted filesystem work? The default fsid type encodes only 32 bits of the inode number for subdirectory exports. However, exporting the root of the filesystem
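Per the FAQ text quoted above, exporting the root of the filesystem sidesteps the 32-bit fsid encoding that breaks subdirectory exports. A hedged sketch with hypothetical paths:

  # /etc/exports -- export the filesystem root rather than a subdirectory
  /bricks/brick1  *(rw,no_subtree_check)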
2015 Aug 04
2
xfs question
Hi, folks, CentOS 6.6 (well, just updated with CR). I have some XFS filesystems on a RAID. They've been mounted with the 'defaults' option. Will it break the whole thing if I now change that to inode64, or was that something I needed to do when the fs was created, or is there some conversion I can run that won't break everything? mark
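On kernels that permit changing it at remount time, inode64 can be applied to a live filesystem without recreating it; only inodes allocated afterwards are affected. A minimal sketch, mount point hypothetical:

  # switch a mounted xfs to inode64; existing files keep their inode numbers
  mount -o remount,inode64 /srv/bigfs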
2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote: > m.roth at 5-cent.us wrote: >> I'm exporting a directory, firewall's open on both machines (one CentOS >> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but >> the >> other server, not so much. >> >> ls /mountpoint/directory eventually times out (directory being the NFS >> mount). mount -t nfs
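When an export mounts locally but hangs from a second machine, the usual first checks are whether that client can see the export list and the RPC services at all. A diagnostic sketch, hostname hypothetical:

  # run from the failing client
  showmount -e nfsserver.example.com
  rpcinfo -p nfsserver.example.com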
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and two weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We are using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new) and recently 2 weeks ago
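ENOSPC with plenty of free blocks is the classic symptom of inode exhaustion, which df -i exposes while plain df hides. A usage sketch against the brick path from the thread:

  # compare block usage (Use%) with inode usage (IUse%)
  df -h /lvbackups/brick
  df -i /lvbackups/brick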
2013 Oct 09
1
XFS quotas not working at all (seemingly)
Hi All, I have a very strange problem that I'm unable to pinpoint at the moment. For some reason I am simply unable to get xfs_quota to report correctly on a freshly installed, fully patched CentOS 6 box. I have specified all the same options as on another machine which *is* reporting quota:
LABEL=TEST /exports/TEST xfs inode64,nobarrier,delaylog,usrquota,grpquota 0 0
xfs_quota -xc
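The snippet cuts off mid-invocation; a typical reporting command, assuming the mount point above, would look like:

  # -x enables expert mode, -c runs a single quota command
  xfs_quota -xc 'report -h' /exports/TEST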
2015 Nov 04
5
stale file handle issue [SOLVED]
*sigh* The answer is that the large exported filesystem is a very large XFS... and at least through CentOS 6, upstream has *never* fixed an NFS bug that, googling, I find being complained about since '09: it gags on inodes > 32 bits (not sure if that's signed or unsigned, but....). The answer was to either create, or find, an unneeded directory with a < 32-bit inode, rename the
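Locating a directory that still has a sub-32-bit inode number is straightforward with GNU find. A sketch, export path hypothetical:

  # list inode numbers of top-level directories; pick one below 4294967296
  find /export -maxdepth 1 -type d -printf '%i\t%p\n' | sort -n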
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2015 Mar 02
2
NFS and inode64
Well, we got it working. However, the issue we're now worried about is users creating files and subdirectories. Do we need to worry, and if so, is there some way to reserve inodes below the 32-bit boundary, other than creating tens of thousands of dummy files now? We don't want, a year or two down the road, for this system to be running, and suddenly everything's broken, because all lower inodes
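One way to watch for this creeping condition is to scan periodically for inodes that have crossed the 32-bit boundary. A hedged sketch, mount point hypothetical:

  # count inodes above 2^32 - 1 on a single filesystem
  find /srv/bigfs -xdev -printf '%i\n' | awk '$1 > 4294967295 {c++} END {print c+0, "inodes above 32 bits"}'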
2012 Oct 09
2
Mount options for NFS
We're experiencing problems with some legacy software when it comes to NFS access. Even though files are visible in a terminal and can be accessed with standard shell tools and vi, this software typically complains that the files are empty or not syntactically correct. The NFS filesystems in question are 8TB+ XFS filesystems mounted with
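The thread does not show its resolution here, but when legacy software sees empty or malformed files that shell tools read fine, client-side attribute caching is a common suspect; noac trades performance for coherence. A hedged client-side fstab sketch, server and paths hypothetical:

  # client /etc/fstab -- noac disables NFS attribute caching entirely
  nfsserver:/export/data  /mnt/legacy  nfs  noac,hard  0 0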
2015 Aug 04
2
xfs question
On 8/4/2015 12:47 PM, James A. Peltier wrote: > Some older 32-bit software will likely have problems addressing any content outside of the 32-bit inode range. You will be able to see it, but reading and writing said data will likely be problematic. The 99% of software that just does open, read, write will be fine regardless of word size. NFS is the only broken thing I ran into (on CentOS 6
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
CentOS 6.2 system with an XFS filesystem. I'm sharing this filesystem using NFS. When I create a 10 gigabyte test file from an NFS client system:
dd if=/dev/zero of=10Gtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s
Output from 'ls -al; du' during this test:
-rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
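The mid-copy mismatch between ls and du is expected: the NFS client buffers writes and XFS delays allocation, so the two only converge once the data is flushed. To measure honestly, force a flush first; a sketch:

  # conv=fsync makes dd flush the file to disk before exiting
  dd if=/dev/zero of=10Gtest bs=1M count=10000 conv=fsync
  ls -l 10Gtest; du -sh 10Gtest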
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
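The linked group-virt.example settings can be applied in one step via the packaged 'virt' group; the volume name here is hypothetical:

  # apply the predefined virt tuning profile to a volume
  gluster volume set myvol group virt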
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: While I don't know what the issue is, nor its root cause, with GlusterFS under Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article for it describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is that most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems'. -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
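For comparison, a typical qcow2 image as used in a setup like the one described; path, size, and preallocation choice are illustrative:

  # metadata preallocation speeds up first writes without reserving full size
  qemu-img create -f qcow2 -o preallocation=metadata /var/lib/libvirt/images/vm1.qcow2 40G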