similar to: xfs question

Displaying 20 results from an estimated 40000 matches similar to: "xfs question"

2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but then a colleague ran into issues where he did that but was still out of inodes. So, I did more research, and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
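Since inode64 is a mount-time option rather than a mkfs-time one, it can in principle be applied with a remount, with the caveat the thread raises: inodes already allocated in the low allocation groups stay where they are. A minimal sketch, assuming a hypothetical XFS mount at /data:

```shell
# inode64 is applied at mount time, not at mkfs time, so a remount can
# enable it (run as root; /data is a hypothetical mount point):
#   mount -o remount,inode64 /data
# Verify which XFS mounts actually carry the option now:
awk '$3 == "xfs" && $4 ~ /inode64/ {print $2}' /proc/mounts
```

If nothing is printed, no currently mounted XFS filesystem has inode64 in its live mount options.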
2014 Jan 21
2
XFS : Taking the plunge
Hi All, I have been trying out XFS given it is going to be the file system of choice from upstream in el7. Starting with an Adaptec ASR71605 populated with sixteen 4TB WD enterprise hard drives. The version of OS is 6.4 x86_64 and has 64G of RAM. This next part was not well researched as I had a colleague bothering me late on Xmas Eve that he needed 14 TB immediately to move data to from an
2015 Nov 04
5
stale file handle issue [SOLVED]
*sigh* The answer is that the large exported filesystem is a very large XFS... and at least through CentOS 6, upstream has *never* fixed an NFS bug that I find, googling, being complained about in '09: it gags on inodes > 32bit (not sure if that's signed, or unsigned, but....). The answer was to either create, or find an unneeded directory with a < 32bit inode, rename the
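The workaround described above, locating a directory whose inode number still fits in 32 bits, can be sketched like this; /export is a hypothetical export root:

```shell
# Print top-level directories whose inode number fits in 32 bits,
# i.e. candidates the old NFS code can still hand out without gagging:
find /export -maxdepth 1 -type d -printf '%i %p\n' 2>/dev/null |
awk '$1 <= 4294967295 {print $2}'
```

On a filesystem mounted inode64 for a long time, this list can be empty, which is exactly the situation the post describes.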
2015 Aug 04
0
xfs question
----- Original Message ----- | John R Pierce wrote: | > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: | >> | >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems | >> on a RAID. They've been mounted with the option of defaults. Will it break | >> the whole thing if I now change that to inode64, or was that something | >> I needed to do
2012 Mar 02
1
xfs, inode64, and NFS
we recently deployed some large XFS file systems with centos 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ these reports are somewhat vague (third indirectly reported via internal corporate channels from
2014 Sep 25
1
CentOS 7, xfs
Well, I've set up one of our new JetStors. xfs took *seconds* to put a filesystem on it. We're talking what df -h shows as 66TB. (Pardon me, my mind just SEGV'd on that statement....) Using bonnie++, I found that a) GPT partitioning was insignificantly different from creating an xfs filesystem on a raw disk. I'm more comfortable with the partition, though.
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>

2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote: > m.roth at 5-cent.us wrote: >> I'm exporting a directory, firewall's open on both machines (one CentOS >> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but >> the >> other server, not so much. >> >> ls /mountpoint/directory eventually times out (directory being the NFS >> mount). mount -t nfs
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been running a 12 node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new issue 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been running a 12 node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago
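The df -i check suggested in the reply looks like this; the brick path /lvbackups/brick is from the thread, and "/" is substituted here only so the command runs anywhere:

```shell
# ENOSPC can mean "out of inodes" even when df -h shows free space.
# The IUse% column is the fraction of inodes consumed:
df -i / | awk 'NR==2 {print "inode use: " $5}'
```

If IUse% is at or near 100%, new files fail with ENOSPC regardless of free blocks.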
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi, I am trying to get the most out of my 40TB xfs file system and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things). The problem is that I have to export the filesystem via NFS and I cannot seem to get this working with the current version of nfs-utils (1.2.3). The export option fsid=uuid cannot be used (the standard
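Where nfs-utils cannot use fsid=uuid, the usual workaround is pinning a small fixed numeric fsid per export so the file handle is not derived from the device. A hypothetical /etc/exports line (path and network are placeholders):

```
/srv/bigxfs  192.168.0.0/24(rw,no_subtree_check,fsid=1)
```

Each export needs a distinct fsid value; 0 is reserved for the NFSv4 root.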
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> We have 1 system running CentOS 7 that is the NFS server. There are 50 >> external machines that FTP files to this server fairly continuously. >> >> We have another system running CentOS 6 that mounts the partition the files >> are FTP-ed to using NFS. > <snip>
2015 Feb 27
2
Odd nfs mount problem
I'm exporting a directory, firewall's open on both machines (one CentOS 6.6, the other RHEL 6.6), it automounts on the exporting machine, but the other server, not so much. ls /mountpoint/directory eventually times out (directory being the NFS mount). mount -t nfs server:/location/being/exported /mnt works... but an immediate ls /mnt gives me stale file handle. The twist on this: the
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
Centos 6.2 system with xfs filesystem. I'm sharing this filesystem using nfs. When I create a 10 gigabyte test file from a nfs client system : dd if=/dev/zero of=10Gtest bs=1M count=10000 10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s Output from 'ls -al ; du' during this test : -rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
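The mismatch shown above between ls and du during the write is the difference between a file's reported byte size and its allocated blocks, which XFS (and NFS write-behind) can make diverge temporarily. A small local illustration:

```shell
# Reported size (ls) vs allocated blocks (du) are separate quantities:
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=10 status=none
ls -l "$f" | awk '{print $5}'    # byte size: 10485760
du -k "$f" | awk '{print $1}'    # allocated KiB; may differ on XFS
rm -f "$f"
```

On XFS the du figure can also exceed the ls figure while speculative preallocation is held, before it is trimmed back.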
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big RAID). What I'm considering is, rather than chopping it up into 14TB or 16TB filesystems, using xfs for really big filesystems. The question that's come up is: what's the state of xfs on CentOS 6? I've seen a number of older threads reporting problems with it - has that mostly been resolved? How does
2015 Nov 09
2
Rsync and differential Backups
On 11/9/2015 11:34 AM, Valeri Galtsev wrote: > I wonder how filesystem behaves when almost every file has some 400 hard > links to it. (thinking in terms of a year worth of daily backups). XFS handles this fine. I have a backuppc storage pool with backups of 27 servers going back a year... now, I just have 30 days of incrementals, and 12 months of fulls, but in backuppc's
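A quick illustration of the hard-link counts a backuppc-style pool relies on; paths are hypothetical, and the point is that each extra name bumps st_nlink while the data blocks exist only once:

```shell
# Create one pool file and three hard links to it, then read st_nlink:
d=$(mktemp -d)
echo data > "$d/pool-file"
for i in 1 2 3; do ln "$d/pool-file" "$d/backup-$i"; done
stat -c %h "$d/pool-file"   # prints 4: the original name plus 3 links
rm -rf "$d"
```

A real pool file with 400 links is the same mechanism, just with a larger link count, which XFS stores without trouble.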
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three node glusterfs setup with one volume and Proxmox is attached and VMs are created, but after some time, and I think after much i/o is going on for a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly everything is OK, I've
2015 Aug 04
0
xfs question
On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: > Hi, folks, > > CentOS 6.6 (well, just updated with CR). I have some xfs filesystems on > a RAID. They've been mounted with the option of defaults. Will it break > the whole thing if I now change that to inode64, or was that something > I needed to do when the fs was created, or is there some conversion I > can run that
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
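For reference, the linked profile is applied per volume with the group keyword; the volume name below is hypothetical, and, as the post stresses, sharding cannot be disabled once this enables it:

```
# Apply the upstream virt group settings to a VM-image volume:
gluster volume set vmimages group virt
```

This sets the whole bundle of options from the group file (sharding, cache settings, etc.) in one step rather than one volume-set call per option.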