similar to: File\Directory not healing

Displaying 20 results from an estimated 2000 matches similar to: "File\Directory not healing"

2023 Feb 14
1
File\Directory not healing
I guess you didn't receive my last e-mail. Use getfattr and identify whether the gfids mismatch. If yes, move the mismatched one away. For a dir to heal, you have to fix all files inside it before it can be healed. Best Regards, Strahil Nikolov On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote: I've touched the directory one
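A minimal sketch of that gfid check, assuming a placeholder brick path; run it against each node's brick copy and compare the trusted.gfid values:

    # read the gfid stored on this brick's copy of the file
    getfattr -d -e hex -m trusted.gfid /bricks/brick1/path/to/file
    # if one brick reports a different trusted.gfid than the others,
    # move that copy out of the brick and let self-heal recreate it
    # (in practice the matching .glusterfs hard link may also need removing)
    mv /bricks/brick1/path/to/file /root/quarantine/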
2023 Feb 07
1
File\Directory not healing
Hi All. Hoping you can help me with a healing problem. I have one file which didn't self-heal. It looks to be a problem with a directory in the path, as one node says it's dirty. I have a replica volume with arbiter. This is what the 3 nodes say, one brick on each. Node1: getfattr -d -m . -e hex /path/to/dir | grep afr getfattr: Removing leading '/' from absolute path names
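For reference, a rough sketch of how those afr attributes are read on a brick (paths and volume name are illustrative):

    getfattr -d -m . -e hex /bricks/brick1/path/to/dir | grep afr
    # non-zero trusted.afr.<volume>-client-N or trusted.afr.dirty values
    # indicate pending heals against the corresponding brick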
2024 Jan 27
1
Upgrade 10.4 -> 11.1 making problems
You don't need to mount it. Like this : # getfattr -d -e hex -m. /path/to/brick/.glusterfs/00/46/00462be8-3e61-4931-8bda-dae1645c639e # file: 00/46/00462be8-3e61-4931-8bda-dae1645c639e trusted.gfid=0x00462be83e6149318bdadae1645c639e trusted.gfid2path.05fcbdafdeea18ab=0x30326333373930632d386637622d346436652d393464362d3936393132313930643131312f66696c656c6f636b696e672e7079
2024 Jan 25
1
Upgrade 10.4 -> 11.1 making problems
Good morning, hope I got it right... using: https://access.redhat.com/documentation/de-de/red_hat_gluster_storage/3.1/html/administration_guide/ch27s02 mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata gfid 1: getfattr -n trusted.glusterfs.pathinfo -e text /mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb getfattr: Removing leading '/' from absolute path
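Put together, the gfid-to-path resolution from that guide looks roughly like this (volume name and gfid are the ones from the thread; the mount point is arbitrary):

    mount -t glusterfs -o aux-gfid-mount glusterpub1:/workdata /mnt/workdata
    getfattr -n trusted.glusterfs.pathinfo -e text \
        /mnt/workdata/.gfid/faf59566-10f5-4ddd-8b0c-a87bc6a334fb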
2024 Jan 24
1
Upgrade 10.4 -> 11.1 making problems
Hi, Can you find and check the files with gfids: 60465723-5dc0-4ebe-aced-9f2c12e52642 and faf59566-10f5-4ddd-8b0c-a87bc6a334fb? Use the 'getfattr -d -e hex -m. ' command from https://docs.gluster.org/en/main/Troubleshooting/resolving-splitbrain/#analysis-of-the-output . Best Regards, Strahil Nikolov On Sat, Jan 20, 2024 at 9:44, Hu Bert <revirii at googlemail.com> wrote: Good morning,
2024 Jul 22
1
Confusion supreme
Hi Zenon, First step would be to ensure that all clients are connected to all bricks - this will reduce the chance of new problems. For some reason there are problems with the broken node. Did you reduce the replica to 2 before reinstalling the broken node and re-adding it to the TSP? Try to get the attributes and the blames of a few files. The following article (check all 3 parts) could help you
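One hedged way to confirm that every client sees every brick, assuming a volume named gv0:

    gluster volume status gv0 clients
    # every brick should list the same set of client addresses; a brick
    # with fewer entries points to a client that lost its connection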
2019 Dec 20
1
GFS performance under heavy traffic
Hi David, Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases). That way, when the primary is lost, your client can reach a backup one without disruption. P.S.: Client may 'hang' - if the primary server got
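A sketch of that mount option with placeholder server and volume names:

    mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/myvol /mnt/myvol
    # or in /etc/fstab:
    # server1:/myvol  /mnt/myvol  glusterfs  defaults,backupvolfile-server=server2:server3  0 0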
2024 Jan 20
1
Upgrade 10.4 -> 11.1 making problems
Good morning, thx Gilberto, did the first three (set to WARNING), but the last one doesn't work. Anyway, after setting these three, some new messages appear: [2024-01-20 07:23:58.561106 +0000] W [MSGID: 114061] [client-common.c:796:client_pre_lk_v2] 0-workdata-client-11: remote_fd is -1. EBADFD [{gfid=faf59566-10f5-4ddd-8b0c-a87bc6a334fb}, {errno=77}, {error=File descriptor in bad state}]
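The three settings referred to are presumably the diagnostics log levels; a sketch using the volume name from the thread:

    gluster volume set workdata diagnostics.client-log-level WARNING
    gluster volume set workdata diagnostics.brick-log-level WARNING
    gluster volume set workdata diagnostics.brick-sys-log-level WARNING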
2024 Jun 26
1
Confusion supreme
I should add that in /var/lib/glusterd/vols/gv0/gv0-shd.vol and in all other configs in /var/lib/glusterd/ on all three machines the nodes are consistently named: client-2: zephyrosaurus, client-3: alvarezsaurus, client-4: nanosaurus. This is normal. It was the second time that a brick was removed, so client-0 and client-1 are gone. So the problem is the file attributes themselves. And there I see
2024 Feb 18
1
Graceful shutdown doesn't stop all Gluster processes
Well, you prepare the host for shutdown, right? So why don't you set up systemd to start the container and shut it down before the bricks? Best Regards, Strahil Nikolov On Friday, 16 February 2024 at 18:48:36 GMT+2, Anant Saraswat <anant.saraswat at techblue.co.uk> wrote: Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to
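A rough sketch of that systemd ordering, assuming the container runs under a unit called my-container.service and consumes the volume via a fuse mount at /mnt/workdata (both names are illustrative):

    # /etc/systemd/system/my-container.service.d/order.conf
    [Unit]
    # ordering is reversed at shutdown, so the container is stopped
    # before glusterd and before the mount goes away
    After=glusterd.service
    RequiresMountsFor=/mnt/workdata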
2024 Feb 16
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, Yes, we mount the fuse to the physical host and then use bind mount to provide access to the container. The same physical host also runs the gluster server. Therefore, when we stop gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills the fuse mount and impacts containers accessing this volume via bind. Thanks, Anant ________________________________
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
After forcing the add-brick: gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force volume add-brick: success pve01:~# gluster volume info Volume Name: VMS Type: Distributed-Replicate Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1:
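Roughly, converting an existing 3 x 2 distributed-replicate volume to arbiter means adding one arbiter brick per replica set, in order; 'force' is needed here only because all three new bricks sit on the same server. A sketch using the names from the thread:

    gluster volume add-brick VMS replica 3 arbiter 1 \
        arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 force
    gluster volume info VMS    # should now report: Number of Bricks: 3 x (2 + 1) = 9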
2024 Feb 26
1
Graceful shutdown doesn't stop all Gluster processes
Hi Strahil, In our setup, the Gluster brick comes from an iSCSI SAN storage and is then used as a brick on the Gluster server. To extend the brick, we stop the Gluster server, extend the logical volume (LV) on the SAN server, resize it on the host, mount the brick with the extended size, and finally start the Gluster server. Please let me know if this process can be optimized, I will be happy to
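One possible shape of that sequence, assuming LVM on the host over a multipathed iSCSI LUN and an XFS brick; all device and path names are illustrative:

    ./stop-all-gluster-processes.sh        # stop gluster on this node
    # (extend the LUN on the SAN side, then make the host see the new size)
    iscsiadm -m session --rescan
    pvresize /dev/mapper/mpatha
    lvextend -l +100%FREE /dev/vg_brick/lv_brick
    xfs_growfs /bricks/brick1              # grow the mounted brick filesystem
    systemctl start glusterd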
2019 Dec 24
1
GFS performance under heavy traffic
Hi David, On Dec 24, 2019 02:47, David Cunningham <dcunningham at voisonics.com> wrote: > > Hello, > > In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node? It makes sense, as no data is being generated towards
2023 Jun 07
1
Geo replication procedure for DR
Dear Strahil, Thank you for the detailed command. So once you want to switch all traffic to the DR site in case of disaster, one should first disable the read-only setting on the secondary volume on the slave site. What happens when the master site is back online? What's the procedure there? I had the following question in my previous mail in this regard: "And once the primary
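The read-only flag on the secondary (DR) volume is a single volume option; a sketch with a placeholder volume name:

    # normally the DR copy is kept read-only:
    gluster volume set drvol features.read-only on
    # to switch traffic to the DR site, make it writable again:
    gluster volume set drvol features.read-only off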
2023 Jun 07
1
How to find out data alignment for LVM thin volume brick
Dear Strahil, Thank you very much for pointing me to the RedHat documentation. I wasn't aware of it and it is much more detailed. I will have to read it carefully. Now, as I have a single disk (no RAID), based on that documentation I understand that I should use a data alignment value of 256 kB. Best regards, Mabi ------- Original Message ------- On Wednesday, June 7th, 2023 at 6:56 AM,
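As a hedged illustration of applying that alignment when building a thin-provisioned brick, loosely following the Red Hat guide; device, VG and sizes are placeholders:

    pvcreate --dataalignment 256K /dev/sdb
    vgcreate vg_bricks /dev/sdb
    lvcreate -L 900G --chunksize 256K --thinpool brickpool vg_bricks
    lvcreate -V 1T --thin -n brick1 vg_bricks/brickpool
    mkfs.xfs -i size=512 /dev/vg_bricks/brick1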
2020 Jul 01
3
Samba-4.10.4 strange behaviour
Hi Felix, thanks for the share. Sadly it doesn't work and I don't know how to start debugging this one. I tried your config (had to switch from domain member to standalone) but it's the same: [global] netbios name = yourName workgroup = yourWorkgroup realm = YourRealm log file = /var/log/samba/log.%m max log size = 50 security = ads
2018 Apr 08
1
Wiki update
Hello Community, my name is Strahil Nikolov (hunter86_bg) and I would like to update the following wiki page. In section "Create the New Initramfs or Initrd" there should be an additional line for CentOS 7: mount --bind /run /mnt/sysimage/run The 'run' directory is needed especially if you need to start the multipathd.service before recreating the initramfs ('/' is on
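In context, the rescue-mode sequence would look roughly like this; the bind of /run is the line being proposed, the rest is the usual chroot preparation:

    mount --bind /dev  /mnt/sysimage/dev
    mount --bind /proc /mnt/sysimage/proc
    mount --bind /sys  /mnt/sysimage/sys
    mount --bind /run  /mnt/sysimage/run   # needed so multipathd.service can start
    chroot /mnt/sysimage
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)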
2020 Jul 02
1
Samba-4.10.4 strange behaviour
Hi Rowland, I meant that I removed some extra options from Felix's settings, as they are for Samba as a DC, and my setup is a stand-alone Samba. @Anoop, so this is expected? I will check the documentation and follow up. Maybe I can optimize the Gluster documentation and update that? Best Regards, Strahil Nikolov On 1 July 2020 at 21:07:26 GMT+03:00, Rowland penny via samba <samba at
2024 Nov 08
1
Add an arbiter when have multiple bricks at same server.
What's the volume structure right now? Best Regards, Strahil Nikolov On Wed, Nov 6, 2024 at 18:24, Gilberto Ferreira <gilberto.nunes32 at gmail.com> wrote: So I went ahead and did the force (is with you!) gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1 arbiter:/arbiter2 arbiter:/arbiter3 volume add-brick: failed: Multiple bricks of a replicate volume are present