similar to: NFS and inode64

Displaying 20 results from an estimated 20000 matches similar to: "NFS and inode64"

2015 Mar 02
0
NFS and inode64
On 3/2/2015 11:56 AM, m.roth at 5-cent.us wrote: > Well, we got it working. However, the issue we're now worried about is > users creating files and subdirectories. Do we need to worry, and if so, > is there some way to reserve inodes below the 32-bit limit, other than creating tens > of thousands of dummy files now? > > We don't want, a year or two down the road, for this system
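A quick way to audit whether any inodes on such a filesystem have already crossed the 32-bit boundary is a find/awk one-liner. A minimal sketch, assuming GNU find and a hypothetical mount point /bigfs:

    # print any file whose inode number no longer fits in 32 bits
    # (/bigfs is a hypothetical mount point)
    find /bigfs -xdev -printf '%i %p\n' | awk '$1 > 4294967295'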
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
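The usual way to answer this on a scratch filesystem is to try a live remount and then verify the option actually took. A minimal sketch, with a hypothetical mount point /data (kernel support for adding inode64 at remount time varies, so check rather than assume):

    mount -o remount,inode64 /data
    grep ' /data ' /proc/mounts    # inode64 should now appear in the options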
2012 Mar 02
1
xfs, inode64, and NFS
we recently deployed some large XFS file systems with CentOS 6.2 used as NFS servers... I've had some reports of a problem similar to the one reported here... http://www.linuxquestions.org/questions/red-hat-31/xfs-inode64-nfs-export-no_subtree_check-and-stale-nfs-file-handle-message-855844/ these reports are somewhat vague (third indirectly reported via internal corporate channels from
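The workaround that usually comes out of these threads is to pin the export's file handle with an explicit fsid, so it no longer depends on the (possibly 64-bit) root inode number. A minimal sketch of an /etc/exports line, with a hypothetical path and fsid value:

    # the numeric fsid (101 here) is arbitrary but must be unique per export;
    # /bigfs is a hypothetical mount point
    /bigfs  *(rw,no_subtree_check,fsid=101)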
2014 Sep 25
1
CentOS 7, xfs
Well, I've set up one of our new JetStors. xfs took *seconds* to put a filesystem on it. We're talking what df -h shows as 66TB. (Pardon me, my mind just SEGV'd on that statement....) Using bonnie++, I found that a) GPT partitioning gave performance insignificantly different from creating an xfs filesystem on the raw disk. I'm more comfortable with the partition, though.
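For reference, the setup described above fits in two commands. A minimal sketch, assuming a hypothetical device /dev/sdX:

    # GPT label, one partition spanning the device, then xfs on top
    parted -s /dev/sdX mklabel gpt mkpart primary 1MiB 100%
    mkfs.xfs /dev/sdX1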
2015 Nov 04
5
stale file handle issue [SOLVED]
*sigh* The answer is that the large exported filesystem is a very large XFS... and at least through CentOS 6, upstream has *never* fixed an NFS bug that I find, googling, being complained about since '09: it gags on inodes > 32 bits (not sure if that's signed or unsigned, but....). The answer was to either create, or find, an unneeded directory with a < 32-bit inode, rename the
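Hunting for a directory that still has a small inode number can be scripted. A minimal sketch, with hypothetical paths:

    # check one candidate directory's inode number
    stat -c '%i %n' /bigfs/candidate-dir
    # or search the top of the fs for any directory with a < 32-bit inode
    find /bigfs -maxdepth 2 -type d -printf '%i %p\n' | awk '$1 < 4294967296'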
2013 Jun 19
1
XFS inode64 NFS export on CentOS6
Hi, I am trying to get the most out of my 40TB xfs file system and I have noticed that the inode64 mount option gives me a roughly 30% performance increase (besides the other useful things). The problem is that I have to export the filesystem via NFS and I cannot seem to get this working with the current version of nfs-utils (1.2.3). The export option fsid=uuid cannot be used (the standard
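Where fsid=uuid is rejected by older nfs-utils, a plain numeric fsid is the usual fallback. A minimal sketch of an /etc/exports line, with a hypothetical path and subnet:

    # numeric fsid as a fallback where fsid=uuid is not accepted
    /srv/xfs40tb  192.168.1.0/24(rw,no_subtree_check,fsid=1)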
2012 Mar 10
0
XFS inode64 and Gluster 3.2.5 NFS export
Hi, I've recently had data loss on an XFS (inode64) glusterfs (3.2.5) NFS exported file system. I was using the gluster NFS server. On the XFS FAQ page, they have this: Q: Why doesn't NFS-exporting subdirectories of inode64-mounted filesystem work? The default fsid type encodes only 32 bits of the inode number for subdirectory exports. However, exporting the root of the filesystem
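Per that FAQ entry, exporting the root of the filesystem sidesteps the 32-bit subdirectory-handle encoding, and clients can still mount a subtree of the export. A minimal sketch, with hypothetical names:

    # server, /etc/exports: export the root of the inode64 filesystem
    /bigfs  *(rw,no_subtree_check)
    # client: mounting a subdirectory of the export still works
    mount -t nfs server:/bigfs/projects /mnt/projects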
2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote: > m.roth at 5-cent.us wrote: >> I'm exporting a directory, firewall's open on both machines (one CentOS >> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but >> the >> other server, not so much. >> >> ls /mountpoint/directory eventually times out (directory being the NFS >> mount). mount -t nfs
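The usual first steps for a mount that hangs like this are to confirm the export and the RPC services are visible from the failing client. A minimal sketch, with a hypothetical server name:

    showmount -e nfsserver       # is the export visible at all?
    rpcinfo -p nfsserver         # are mountd and nfsd registered?
    mount -v -t nfs nfsserver:/export /mnt/test    # verbose mount for clues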
2014 May 28
3
The state of xfs on CentOS 6?
We're looking at getting an HBR (that's a technical term, honkin' big RAID). What I'm considering is, rather than chopping it up into 14TB or 16TB filesystems, using xfs for really big filesystems. The question that's come up is: what's the state of xfs on CentOS 6? I've seen a number of older threads describing problems with it - has that mostly been resolved? How does
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but then a colleague ran into issues where he did that but was still out of inodes. So, I did more research, and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
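The key point in that posting is that a remount only changes where new inodes are allocated; inodes that already exist stay where they are. Checking inode headroom before and after makes the effect visible. A minimal sketch, with a hypothetical mount point:

    df -i /fs                        # inode usage before
    mount -o remount,inode64 /fs
    df -i /fs                        # only newly allocated inodes are affected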
2015 Aug 04
2
xfs question
On 8/4/2015 12:47 PM, James A. Peltier wrote: > Some older 32-bit software will likely have problems addressing any content outside of the 32-bit inode range. You will be able to see it, but reading and writing said data will likely be problematic The 99% of software that just does open, read, write will be fine regardless of word size. NFS is the only broken thing I ran into (on CentOS 6
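The breakage typically shows up in 32-bit binaries built without large-file support, whose stat() fails with EOVERFLOW once an inode number exceeds 32 bits. Spot-checking from the shell, with a hypothetical path:

    # print the inode number; values above 4294967295 will make a 32-bit,
    # non-LFS stat() call fail with EOVERFLOW
    stat -c '%i %n' /bigfs/some/file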
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and two weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise. We have been using a 12-node glusterfs v10.4 distributed vsftpd backup cluster for years (not new), and two weeks ago
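On a distributed volume it is worth checking inode usage on every brick, since one full brick can surface as ENOSPC for the whole volume. A minimal sketch with hypothetical hostnames (the brick path is the one from the post above):

    for h in node01 node02 node03; do
        echo "== $h =="; ssh "$h" df -i /lvbackups/brick
    done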
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three-node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, and I think after heavy I/O inside a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly, everything is OK. I've
2015 Aug 04
2
xfs question
Hi, folks, CentOS 6.6 (well, just updated with CR). I have some xfs filesystems on a RAID. They've been mounted with the option of defaults. Will it break the whole thing if I now change that to inode64, or was that something I needed to do when the fs was created, or is there some conversion I can run that won't break everything? mark
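Before changing anything, it is worth confirming what the filesystem is mounted with today, then staging inode64 in fstab for the next mount. A minimal sketch, with a hypothetical device and mount point:

    grep ' xfs ' /proc/mounts    # current options of each xfs mount
    # /etc/fstab -- add inode64 for the next (re)mount, e.g.:
    # /dev/sdb1  /raidfs  xfs  defaults,inode64  0 0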
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
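The settings in that file can be applied in one step with gluster's predefined "virt" option group. A minimal sketch with a hypothetical volume name (and, per the warning above, this enables sharding, which cannot be turned off again):

    # applies the whole virt group of options to the volume
    gluster volume set myvol group virt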
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: While I don't know the issue nor the root cause of your problem with using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article for it says is a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is because most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems' -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody, Regarding the issue with mount, usually I am using this systemd service to bring up the mount points:

/etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
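The unit above is cut off by the excerpt; a complete unit along the same lines might finish as below. The ExecStart line and the [Install] section are assumptions, not from the post:

    # assumed continuation -- mount all glusterfs fstab entries once glusterd is up
    ExecStart=/bin/mount -a -t glusterfs
    Restart=on-failure
    RestartSec=3

    [Install]
    WantedBy=multi-user.target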
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings are recommended (preferably applied at the creation of the volume). These settings are important for data reliability (note that Replica 3 or Replica 2+1 is expected):

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
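Those options can be applied one at a time with gluster volume set. A minimal sketch, with a hypothetical volume name:

    vol=myvol    # hypothetical volume name
    gluster volume set "$vol" performance.quick-read off
    gluster volume set "$vol" performance.read-ahead off
    gluster volume set "$vol" performance.io-cache off
    gluster volume set "$vol" performance.low-prio-threads 32
    gluster volume set "$vol" network.remote-dio enable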