similar to: XFS, inode64, and remount

Displaying 20 results from an estimated 7000 matches similar to: "XFS, inode64, and remount"

2014 Jan 21
2
XFS: Taking the plunge
Hi All, I have been trying out XFS given it is going to be the file system of choice from upstream in el7. Starting with an Adaptec ASR71605 populated with sixteen 4TB WD enterprise hard drives. The OS version is 6.4 x86_64 and the box has 64G of RAM. This next part was not well researched, as I had a colleague bothering me late on Xmas Eve saying he needed 14 TB immediately to move data to from an
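Not from the post, but a minimal sketch of the mkfs/mount step that usually follows on a hardware RAID volume like this (device name and RAID geometry are hypothetical); aligning the stripe unit/width at mkfs time and mounting with inode64 are the main first decisions at this size:

    # hypothetical device and RAID6 geometry (14 data + 2 parity, 256k stripe)
    mkfs.xfs -d su=256k,sw=14 /dev/sda1
    mount -o inode64 /dev/sda1 /data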
2013 Oct 09
1
XFS quotas not working at all (seemingly)
Hi All, I have a very strange problem that I'm unable to pinpoint at the moment. For some reason I am simply unable to get xfs_quota to report correctly on a freshly installed, fully patched CentOS 6 box. I have specified all the same options as on another machine which *is* reporting quota: LABEL=TEST /exports/TEST xfs inode64,nobarrier,delaylog,usrquota,grpquota 0 0 xfs_quota -xc
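For reference, a sketch of how the quota state would normally be checked (label and path taken from the post); note that XFS quota mount options generally cannot be enabled with a plain remount, so a full umount/mount is needed for them to take effect:

    umount /exports/TEST
    mount LABEL=TEST /exports/TEST          # picks up usrquota,grpquota from fstab
    xfs_quota -xc 'state' /exports/TEST
    xfs_quota -xc 'report -h' /exports/TEST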
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2012 Mar 02
1
xfs, inode64, and NFS
We recently deployed some large XFS file systems with CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here: http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ These reports are somewhat vague (third-hand, indirectly reported via internal corporate channels from
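A hedged sketch of the workaround commonly suggested in that linked thread (export path is hypothetical, not from the post): export the root of the inode64 filesystem rather than a subdirectory, or pin an explicit fsid so the NFS file handle no longer depends on the 32-bit-encoded inode number:

    # /etc/exports
    /export/bigfs   *(rw,no_subtree_check,fsid=1)

    exportfs -ra    # re-read the exports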
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi, I am trying to get the most out of my 40TB xfs file system and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things). The problem is that I have to export the filesystem via NFS and I cannot seem to get this working with the current version of nfs-utils (1.2.3). The export option fsid=uuid cannot be used (the standard
2015 Aug 04
2
xfs question
Hi, folks, CentOS 6.6 (well, just updated with CR). I have some xfs filesystems on a RAID. They've been mounted with the option of defaults. Will it break the whole thing if I now change that to inode64, or was that something I needed to do when the fs was created, or is there some conversion I can run that won't break everything? mark
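As a rough sketch (device and mount point hypothetical), inode64 is just a mount option: it can be added in fstab and picked up on the next mount, and it only changes where new inodes are allocated, so existing files are untouched:

    # /etc/fstab
    /dev/md0   /data   xfs   defaults,inode64   0 0

    umount /data && mount /data
    mount | grep ' /data '      # confirm inode64 shows in the active options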
2012 Mar 10
0
XFS inode64 and Gluster 3.2.5 NFS export
Hi, I've recently had data loss on an XFS (inode64) glusterfs (3.2.5) NFS-exported file system. I was using the gluster NFS server. On the XFS FAQ page, they have this: Q: Why doesn't NFS-exporting subdirectories of an inode64-mounted filesystem work? The default fsid type encodes only 32 bits of the inode number for subdirectory exports. However, exporting the root of the filesystem
2015 Aug 04
0
xfs question
----- Original Message ----- | John R Pierce wrote: | > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: | >> | >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems | >> on a RAID. They've been mounted with the option of defaults. Will it break | >> the whole thing if I now change that to inode64, or was that something | >> I needed to do
2013 Sep 26
1
to lvm or not to lvm - why/when to use lvm
Hi, I was wondering why/when it is useful to use LVM, and when I should avoid it. I think the big advantage of LVM is when you modify (resizing, ...) disk and filesystem layouts "a lot". Are there any real pros or cons for the following situations regarding e.g. management and speed? e.g.: I have a server system RAID for which the disk layout will not change; e.g. /var /usr /home will
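Not from the post, but a minimal sketch (hypothetical VG/LV names) of the resizing flexibility the question is weighing; with LVM underneath, a filesystem can be grown online without repartitioning:

    lvextend -L +50G /dev/vg0/home
    xfs_growfs /home                 # XFS can only grow, never shrink
    # (for ext4 the equivalent would be resize2fs /dev/vg0/home)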
2014 Sep 25
1
CentOS 7, xfs
Well, I've set up one of our new JetStors. xfs took *seconds* to put a filesystem on it. We're talking what df -h shows as 66TB. (Pardon me, my mind just SEGV'd on that statement....) Using bonnie++, I found that a) GPT partitioning was insignificantly different from creating an xfs filesystem on a raw disk. I'm more comfortable with the partition, though.
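For context, a sketch of the two setups being compared (device name hypothetical); mkfs.xfs writes only a small amount of metadata, which is why even a 66TB volume formats in seconds:

    # partitioned
    parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
    mkfs.xfs /dev/sdb1

    # raw device, no partition table
    mkfs.xfs /dev/sdb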
2014 Jul 01
3
corruption of in-memory data detected (xfs)
Hi All, I am having an issue with an XFS filesystem shutting down under high load with very many small files. Basically, I have around 3.5 - 4 million files on this filesystem. New files are being written to the FS all the time, until I get to 9-11 million small files (35k on average). At some point I get the following in dmesg: [2870477.695512] Filesystem "sda5": XFS internal error
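Not part of the report, but a sketch of the usual recovery sequence after an XFS shutdown (device from the dmesg line, mount point hypothetical): unmount, let a clean mount replay the journal, and only then reach for xfs_repair:

    umount /dev/sda5
    mount /dev/sda5 /mnt && umount /mnt    # a clean mount/umount replays the log
    xfs_repair -n /dev/sda5                # -n = check only; drop -n to repair
    # xfs_repair -L (zero the log) is a last resort and can lose recent changes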
2013 Oct 27
2
page allocation failure
CentOS 6.4
/var/log/messages-20131013:Oct 9 03:16:36 vixen kernel: EMT: page allocation failure. order:4, mode:0xd0
/var/log/messages-20131020:Oct 14 13:15:11 vixen kernel: httpd: page allocation failure. order:2, mode:0x20
/var/log/messages-20131020:Oct 14 13:15:11 vixen kernel: httpd: page allocation failure. order:2, mode:0x20
/var/log/messages-20131027:Oct 20 16:00:47 vixen kernel: sshd:
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
CentOS 6.2 system with xfs filesystem. I'm sharing this filesystem using nfs. When I create a 10 gigabyte test file from an nfs client system:
dd if=/dev/zero of=10Gtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s
Output from 'ls -al ; du' during this test:
-rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
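A hedged way to see what is going on mid-write (filename from the post): compare the apparent size with the blocks actually allocated, since XFS delayed allocation plus NFS client-side caching can make the two disagree until the data is flushed:

    ls -l 10Gtest                      # apparent file size
    du -k 10Gtest                      # blocks allocated so far
    du -k --apparent-size 10Gtest
    sync                               # flush, then compare again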
2015 Nov 10
1
Rsync and differential Backups
On Mon, November 9, 2015 7:52 pm, Keith Keller wrote: > On 2015-11-09, John R Pierce <pierce at hogranch.com> wrote: >> >> XFS handles this fine. I have a backuppc storage pool with backups of >> 27 servers going back a year... now, I just have 30 days of >> incrementals, and 12 months of fulls, > > I'm sure you know this already, but for those who may
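As an aside to the thread (paths hypothetical), the hardlink-based incrementals being described can be reproduced with plain rsync as well:

    rsync -a --delete \
          --link-dest=/backups/2015-11-09 \
          /srv/data/  /backups/2015-11-10/
    # unchanged files become hardlinks into the previous snapshot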
2001 Jul 27
2
Strange remount behaviour with ext3-2.4-0.9.4
Following the announcement on lkml, I have started using ext3 on one of my servers. Since the server in question is a fairly security-sensitive box, my /usr partition is mounted read only except when I remount rw to install packages. I converted this partition to run ext3 with the mount options "nodev,ro,data=writeback,defaults", figuring that when I need to install new packages etc,
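A sketch of the remount cycle being described (package names elided); this is the normal way to toggle a read-only /usr for installs:

    mount -o remount,rw /usr
    # ...install packages...
    mount -o remount,ro /usr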
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise? We have been using a 12 node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and 2 weeks ago upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2008 Jan 07
1
Multiple mount instead of remount?
I'm having issues trying to remount any shares using samba. Super short version: mount -o remount /some_windows_share creates a duplicate mount instead of properly remounting. Full details... I have many shares served off of NT4 boxes, mounted via samba on a Linux box (RHEL 4). Since long-dormant (>12 hour) shares from one server in particular always have problems for a few seconds when
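One hedged way to confirm and work around the behaviour (share and mount point hypothetical, not from the post): check /proc/mounts for the duplicate, then do a full umount/mount cycle instead of a remount:

    grep /mnt/share /proc/mounts          # duplicate entries show up here
    umount /mnt/share
    mount -t cifs //ntserver/share /mnt/share -o credentials=/root/.smbcreds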
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, have you checked inode usage (df -i /lvbackups/brick)? Best regards, Strahil Nikolov. On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise? We are using a 12 node glusterfs v10.4 distributed vsftpd backup cluster for years (not new) and recently 2 weeks ago
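The suggestion amounts to this check (brick path from the thread); ENOSPC is returned both when blocks and when inodes run out, so both need to be looked at:

    df -h /lvbackups/brick      # free blocks
    df -i /lvbackups/brick      # free inodes: IUse% at 100% also yields ENOSPC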
2013 Feb 20
2
NFS mount auto remount in case of problems.
Hi All. I have a setup in which I have two servers serving nfs share. The nfs service is made highly available with pacemaker. When the primary server goes down the secondary starts nfs service. Service IP is floating between servers but they have NO "shared" storage/filesystem so NFS state/connection information in case of failover is lost. I have two clients. When the failover from
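Not from the post, but a sketch of the client-side mount options typically used to ride out a short failover window (server and export are hypothetical); without shared NFS state on the servers, clients can still end up with stale handles after a failover:

    # /etc/fstab on the clients
    nfsserver:/export   /mnt/data   nfs   hard,intr,timeo=600,retrans=5   0 0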
2008 Jan 09
1
mount -o remount /mnt/samba creates duplicate mount
This is a repost, since I'd really like to get some info about what's going on. When using "mount -o remount" on a Linux box, I get a duplicate mount instead of a proper remount. /proc/mounts backs me up: the system really does have multiple mounts in the same place, of the same drive. The version of samba installed is 3.0.10, the distro is RedHat Enterprise Linux 4, kernel is