similar to: stale file handle issue [SOLVED]

Displaying 20 results from an estimated 20000 matches similar to: "stale file handle issue [SOLVED]"

2015 Nov 04
0
stale file handle issue [SOLVED]
On Wed, November 4, 2015 11:59 am, m.roth at 5-cent.us wrote: > *sigh* > > The answer is that the large exported filesystem is a very large XFS... > and at least through CentOS 6, upstream has *never* fixed an NFS bug that > I find, googling, being complained about in '09: it gags on inodes > 32bit > (not sure if that's signed, or unsigned, but....). Mark, are you
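A quick way to see whether an exported XFS filesystem has already handed out inodes past the 32-bit boundary is sketched below; /export is a placeholder path, not from the thread.

# Show whether inode64/inode32 is in effect for the export
grep ' /export ' /proc/mounts
# List files whose inode numbers exceed 32 bits -- these are the ones an
# old 32-bit-only NFS client would choke on
find /export -xdev -inum +4294967295 -printf '%i %p\n' | head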
2015 Aug 04
3
xfs question
John R Pierce wrote: > On 8/4/2015 7:14 AM, m.roth at 5-cent.us wrote: >> >> CentOS 6.6 (well, just updated with CR). I have some xfs filesystems >> on a RAID. They've been mounted with the option of defaults. Will it break >> the whole thing if I now change that to inode64, or was that something >> I needed to do when the fs was created, or is there some
2015 Aug 04
2
xfs question
Hi, folks, CentOS 6.6 (well, just updated with CR). I have some xfs filesystems on a RAID. They've been mounted with the option of defaults. Will it break the whole thing if I now change that to inode64, or was that something I needed to do when the fs was created, or is there some conversion I can run that won't break everything? mark
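A minimal sketch of the remount being asked about, assuming /data is one of the XFS mount points; on older CentOS 6 kernels the option may only take effect on a full umount/mount rather than a remount.

mount -o remount,inode64 /data
# or persistently, in /etc/fstab:
#   /dev/md0  /data  xfs  defaults,inode64  0 0
xfs_info /data   # geometry is unchanged; only where new inodes get allocated changes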
2013 Jul 03
1
Recommended filesystem for GlusterFS bricks.
Hi, Which is the recommended filesystem to be used for the bricks in GlusterFS? XFS/EXT3/EXT4 etc.? Thanks & Regards, Bobby Jacob Senior Technical Systems Engineer | eGroup P SAVE TREES. Please don't print this e-mail unless you really need to.
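For what it's worth, the setup commonly cited for Gluster bricks is XFS with 512-byte inodes; a hedged sketch follows, with the device and paths as placeholders.

mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /bricks/brick1
mount -o noatime,inode64 /dev/sdb1 /bricks/brick1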
2014 Jan 21
2
XFS : Taking the plunge
Hi All, I have been trying out XFS given it is going to be the file system of choice from upstream in el7. Starting with an Adaptec ASR71605 populated with sixteen 4TB WD enterprise hard drives. The OS is 6.4 x86_64 and the machine has 64G of RAM. This next part was not well researched, as I had a colleague bothering me late on Xmas Eve that he needed 14 TB immediately to move data to from an
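Not from the thread, but the stripe-alignment step is what usually matters when making XFS on a hardware RAID like this; a sketch assuming, purely for illustration, a 256 KiB stripe unit across 14 data disks.

# su = per-disk stripe unit, sw = number of data disks in the stripe
mkfs.xfs -d su=256k,sw=14 /dev/sda1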
2023 May 02
1
'error=No space left on device' but, there is plenty of space all nodes
Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise? We are using a 12 node glusterfs v10.4 distributed vsftpd backup cluster that has run for years (not new), and 2 weeks ago we upgraded from v9 to v10.4. I do not know if the upgrade is related to this new issue. We are seeing a new 'error=No space left on device' error
2015 Feb 27
1
Odd nfs mount problem [SOLVED]
m.roth at 5-cent.us wrote: > m.roth at 5-cent.us wrote: >> I'm exporting a directory, firewall's open on both machines (one CentOS >> 6.6, the other RHEL 6.6), it automounts on the exporting machine, but >> the >> other server, not so much. >> >> ls /mountpoint/directory eventually times out (directory being the NFS >> mount). mount -t nfs
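The usual first checks from the client side look something like the sketch below; the hostname and paths are placeholders.

showmount -e nfsserver                        # is the export actually visible?
rpcinfo -p nfsserver | grep -E 'mountd|nfs'   # are mountd and nfs registered?
mount -t nfs -o vers=3 nfsserver:/export /mnt/test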
2023 May 04
1
'error=No space left on device' but, there is plenty of space all nodes
Hi, Have you checked inode usage (df -i /lvbackups/brick)? Best Regards, Strahil Nikolov On Tuesday, May 2, 2023, 3:05 AM, brandon at thinkhuge.net wrote: Hi Gluster users, We are seeing an 'error=No space left on device' issue and hoping someone might be able to advise? We are using a 12 node glusterfs v10.4 distributed vsftpd backup cluster that has run for years (not new), and 2 weeks ago
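A quick way to run that check across all 12 nodes at once is sketched below; the hostnames are placeholders, the brick path is the one from the post above.

for h in node01 node02 node03; do   # ...extend the list to all 12 nodes
    ssh "$h" df -i /lvbackups/brick
done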
2016 Oct 21
3
NFS help
On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> We have 1 system running CentOS 7 that is the NFS server. There are 50 >> external machines that FTP files to this server fairly continuously. >> >> We have another system running CentOS 6 that mounts the partition the files >> are FTP-ed to using NFS. > <snip>
2016 Oct 24
3
NFS help
On Fri, Oct 21, 2016 at 11:42 AM, <m.roth at 5-cent.us> wrote: > Larry Martell wrote: >> On Fri, Oct 21, 2016 at 11:21 AM, <m.roth at 5-cent.us> wrote: >>> Larry Martell wrote: >>>> We have 1 system running CentOS 7 that is the NFS server. There are 50 >>>> external machines that FTP files to this server fairly continuously. >>>>
2013 Oct 29
1
XFS, inode64, and remount
Hi all, I was recently poking more into the inode64 mount option for XFS filesystems. I seem to recall a comment that you could remount a filesystem with inode64, but then a colleague ran into issues where he did that but was still out of inodes. So, I did more research, and found this posting to the XFS list: http://oss.sgi.com/archives/xfs/2008-05/msg01409.html So for people checking the
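One hedged way to test whether a remount actually changed allocation behaviour, assuming /data is the filesystem in question:

mount -o remount,inode64 /data
touch /data/inode64-probe
stat -c '%i' /data/inode64-probe
# A value above 4294967295 proves 64-bit inode allocation is in effect;
# a low value is inconclusive, since XFS may still have free inodes in the lower AGs.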
2015 Nov 04
0
stale file handle issue [SOLVED]
On Wed, Nov 04, 2015 at 12:59:14PM -0500, m.roth at 5-cent.us wrote: > The answer is that the large exported filesystem is a very large XFS... > and at least through CentOS 6, upstream has *never* fixed an NFS bug that > I find, googling, being complained about in '09: it gags on inodes > 32bit > (not sure if that's signed, or unsigned, but....). Out of curiosity, was this
2015 Nov 09
2
Rsync and differential Backups
On 11/9/2015 11:34 AM, Valeri Galtsev wrote: > I wonder how filesystem behaves when almost every file has some 400 hard > links to it. (thinking in terms of a year worth of daily backups). XFS handles this fine. I have a backuppc storage pool with backups of 27 servers going back a year... now, I just have 30 days of incrementals, and 12 months of fulls, but in backuppc's
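The hard-link pattern being described is roughly what rsync's --link-dest does; a hedged one-liner, with the dates and paths as placeholders.

# Unchanged files become hard links to yesterday's copy instead of new data
rsync -a --delete --link-dest=/backups/2015-11-08 /data/ /backups/2015-11-09/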
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, and I think after a lot of I/O inside a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly everything is OK, I've
2016 Aug 09
4
ssh & ksh question
I need to run a report, source file on system 1, on system 2. I'd like to do this in one script, not have a second script to run it. Now cat script | ssh system2 works fine. But no matter what I've tried, it gags on ssh system2 <<EOF blah, blah EOF. Mostly, I have a multiline awk script in the script, with \ at the end of each line... *but* I think it's seeing "\n" as
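For the here-document case, quoting the delimiter usually stops the local shell from mangling the embedded awk script; a sketch, with system2 and the awk body as placeholders.

ssh system2 <<'EOF'
awk '
  { sum += $2 }
  END { print "total:", sum }
' /path/to/source_file
EOF
# With the quoted <<'EOF', the $2, backslashes and newlines reach the remote shell untouched.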
2015 Mar 02
2
NFS and inode64
Well, we got it working. However, the issue we're now worried about is users creating files and subdirectories. Do we need to worry, and if so, is there some way to reserve inodes below the 32-bit limit, other than creating tens of thousands of dummy files now? We don't want, a year or two down the road, for this system to be running, and suddenly everything's broken, because all lower inodes
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
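The options in that file can also be applied as a group in one step; a hedged sketch with a placeholder volume name (and keep the sharding warning above in mind before running it).

gluster volume set myvol group virt   # applies the whole virt option group, including sharding
gluster volume info myvol             # review what actually changed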
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: While I don't know the issue nor the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which, according to its Wikipedia article, is a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is because most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems' -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody, regarding the issue with mount, usually I am using this systemd service to bring up the mount points:

/etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
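The snippet cuts off after the ExecStartPre line; the remainder of such a unit would presumably look something like the sketch below, where the ExecStart command and install target are assumptions, not taken from the post.

ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=3

[Install]
WantedBy=multi-user.target

# then reload and enable it:
systemctl daemon-reload
systemctl enable --now glusterfsmounts.service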