search for: afr

Displaying 20 results from an estimated 395 matches for "afr".

2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, in simple terms, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration: volume afr-ns type cluster/afr subvolumes n1-ns n2-ns n3-ns option data-self-heal on option metadata-self-heal on option entry-self-heal on end-volume volume afr1 type cluster...
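For orientation, a legacy client volfile along the lines the poster describes could be sketched as below. The afr-ns block follows the excerpt; the data-brick and unify blocks, the subvolume names, the scheduler and the file path are assumptions for illustration, not the poster's actual config, and the protocol/client blocks for each brick are omitted:

  cat > /etc/glusterfs/client.vol <<'EOF'   # placeholder path for the client volfile
  # replicated namespace volume, as quoted in the excerpt
  volume afr-ns
    type cluster/afr
    subvolumes n1-ns n2-ns n3-ns
    option data-self-heal on
    option metadata-self-heal on
    option entry-self-heal on
  end-volume

  # one replica set per brick group across the three servers
  volume afr1
    type cluster/afr
    subvolumes n1-brick1 n2-brick1 n3-brick1
  end-volume

  volume afr2
    type cluster/afr
    subvolumes n1-brick2 n2-brick2 n3-brick2
  end-volume

  # unify aggregates the replica sets and uses afr-ns as its namespace
  volume unify0
    type cluster/unify
    subvolumes afr1 afr2
    option namespace afr-ns
    option scheduler rr
  end-volume
  EOF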
2010 Nov 11
1
Possible split-brain
...usterfs/primary Brick8: 192.168.253.2:/glusterfs/secondary The platform is not currently running production data and I have been testing the redundancy of the setup (pulling cables etc.). All my servers are now logging the following messages every 1 minute or so: [2010-11-11 14:18:49.636327] I [afr-common.c:672:afr_lookup_done] datastore-replicate-0: split brain detected during lookup of /. [2010-11-11 14:18:49.636388] I [afr-common.c:716:afr_lookup_done] datastore-replicate-0: background meta-data data self-heal triggered. path: / [2010-11-11 14:18:49.636863] E [afr-self-heal-metadata.c:524...
2017 Oct 26
0
not healing one file
Hey Richard, Could you share the following information, please? 1. gluster volume info <volname> 2. getfattr output of that file from all the bricks getfattr -d -e hex -m . <brickpath/filepath> 3. glustershd & glfsheal logs Regards, Karthik On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote: > On a side note, try the recently released health
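A rough sketch of collecting those three items on one brick host; the volume name and brick-side file path are placeholders, and the log paths are the usual /var/log/glusterfs defaults, which may differ on your install:

  VOL=myvol                                  # placeholder volume name
  F=/bricks/brick1/data/path/to/file         # placeholder brick path of the affected file

  # 1. volume layout and options
  gluster volume info "$VOL"

  # 2. replication xattrs of the file as stored on this brick
  getfattr -d -e hex -m . "$F"

  # 3. recent self-heal daemon and heal-info logs
  tail -n 200 /var/log/glusterfs/glustershd.log
  tail -n 200 /var/log/glusterfs/glfsheal-"$VOL".log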
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines. On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote: > Thanks for this report. This week many of the developers are at Gluster > Summit in Prague, will be checking this and respond next
2017 Oct 26
2
not healing one file
.../recovery.baklz4 getfattr: Removing leading '/' from absolute path names # file: srv/gluster_home/brick/romanoch/.mozilla/firefox/vzzqqxrm.default-1396429081309/sessionstore-backups/recovery.baklz4 security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000 trusted.afr.dirty=0x000000000000000000000000 trusted.bit-rot.version=0x020000000000000059df20a40006f989 trusted.gfid=0xda1c94b1643544b18d5b6f4654f60bf5 trusted.glusterfs.quota.48e9eea6-cda6-4e53-bb4a-72059debf4c2.contri.1=0x0000000000009a000000000000000001 trusted.pgfid.48e9eea6-cda6-4e53-bb4a-72059debf4c2=0x0...
2013 Feb 19
1
Problems running dbench on 3.3
...lients/client5/~dmtmp/PWRPNT/PCBENCHM.PPT failed for handle 10003 (No such file or directory) (610) ERROR: handle 10003 was not found, Child failed with status 1 And the logs are full of things like this (ignore the initial timestamp, that's from our logging): [2013-02-19 14:38:38.714493] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: background data missing-entry gfid self-heal failed on /clients/client5/~dmtmp/PM/MOVED.DOC, [2013-02-19 14:38:38.724494] E [afr-self-heal-common.c:2160:afr_self_heal_completion_cbk] 0-replicate0: background entry self-heal fail...
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
.... > > I've executed the command on all 3 nodes (I know only one is enough), and after that the "heal" command reports between 6 and 10 elements ... (sometimes 6, sometimes 8, sometimes 10). The glustershd.log doesn't say anything: *[2017-07-20 09:58:46.573079] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1 sinks=2* *[2017-07-20 09:59:22.995003] I [MSGID: 108026] [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata...
2012 Feb 05
2
Would difference in size (and content) of a file on replicated bricks be healed?
...] 0-d1-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2012-02-05 21:41:01.206241] D [client-handshake.c:179:client_start_ping] 0-d1-client-3: returning as transport is already disconnected OR there are no frames (0 || 0) [2012-02-05 21:41:03.279124] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-d1-replicate-0: pending_matrix: [ 0 0 0 0 ] [2012-02-05 21:41:03.279182] D [afr-self-heal-common.c:139:afr_sh_print_pending_matrix] 0-d1-replicate-0: pending_matrix: [ 0 0 0 0 ] [2012-02-05 21:41:03.279202] D [afr-self-heal-common.c:139:afr_...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 02:20 PM, yayo (j) wrote: > Hi, > > Thank you for the answer and sorry for delay: > > 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com > <mailto:ravishankar at redhat.com>>: > > 1. What does the glustershd.log say on all 3 nodes when you run > the command? Does it complain anything about these files? > > >
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...blem between your mount and the brick process, leading to pending heals again after the heal gets completed, which is why the numbers are varying each time. You would need to check why that is the case. Hope this helps, Ravi > > /[2017-07-20 09:58:46.573079] I [MSGID: 108026] > [afr-self-heal-common.c:1254:afr_log_selfheal] > 0-engine-replicate-0: Completed data selfheal on > e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1 sinks=2/ > /[2017-07-20 09:59:22.995003] I [MSGID: 108026] > [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] >...
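To see the pattern Ravi describes (counts climbing again after heals complete), one can simply sample heal info over time; a minimal sketch, assuming the volume is named engine as in this thread:

  # print a timestamped per-brick pending-heal count every minute
  while true; do
    date
    gluster volume heal engine info | grep 'Number of entries:'
    sleep 60
  done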
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
...-client-1: Connected to 172.16.95.153:24009, attached to remote volume '/mnt/cloudbrick'. [2013-12-03 05:42:32.790884] I [client-handshake.c:1423:client_setvolume_cbk] 0-glustervol-client-1: Server and Client lk-version numbers are not same, reopening the fds [2013-12-03 05:42:32.791003] I [afr-common.c:3685:afr_notify] 0-glustervol-replicate-0: Subvolume 'glustervol-client-1' came back up; going online. [2013-12-03 05:42:32.791161] I [client-handshake.c:453:client_set_lk_version_cbk] 0-glustervol-client-1: Server lk version = 1 [2013-12-03 05:42:32.795103] E [afr-self-heal-data.c...
2009 Jan 07
12
glusterfs alternative ? :P
I know that this is not the appropriate place :). Does anyone know of an alternative to glusterfs? :)
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
Hi, Thank you for the answer and sorry for the delay: 2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: 1. What does the glustershd.log say on all 3 nodes when you run the > command? Does it complain anything about these files? > No, glustershd.log is clean, no extra log after command on all 3 nodes > 2. Are these 12 files also present in the 3rd data brick?
2023 Feb 07
1
File\Directory not healing
...help me with a healing problem. I have one file which didn't self heal. It looks to be a problem with a directory in the path, as one node says it's dirty. I have a replica volume with arbiter. This is what the 3 nodes say, one brick on each: Node1 getfattr -d -m . -e hex /path/to/dir | grep afr getfattr: Removing leading '/' from absolute path names trusted.afr.volume-client-2=0x000000000000000000000001 trusted.afr.dirty=0x000000000000000000000000 Node2 getfattr -d -m . -e hex /path/to/dir | grep afr getfattr: Removing leading '/' from absolute path names trusted.afr.volu...
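A quick way to line up what the three bricks claim about that directory is to dump the afr xattrs from each node side by side; the hostnames and the brick-side path below are placeholders:

  DIR=/path/to/dir                     # brick path of the suspect directory, not the mount
  for node in node1 node2 node3; do
    echo "== $node =="
    ssh "$node" "getfattr -d -m . -e hex $DIR 2>/dev/null | grep afr"
  done

Roughly: a non-zero trusted.afr.<vol>-client-N on a brick records pending operations that brick N missed, and a non-zero trusted.afr.dirty marks a transaction that never completed its post-op.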
2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
...we can access the same file on another node. At the same time, if we re-mount the volume on the problem node, accessing the same file is fine. Has glusterfs cached some information? This has happened more than once. The log is as follows when split brain occurs. [2013-01-07 09:57:29.554505] W [afr-common.c:931:afr_detect_self_heal_by_lookup_status] 0-gfs1-replicate-5: split brain detected during lookup of /XMTEXT/gfs1_000/000/000/095. [2013-01-07 09:57:29.554566] I [afr-common.c:1039:afr_launch_self_heal] 0-gfs1-replicate-5: background data gfid self-heal triggered. path: /XMTEXT/gfs1_000/...
2008 Sep 05
8
Gluster update | need your support
Dear Members, Even though the Gluster team is growing at a steady pace, our aggressive development schedule outpaces our resources. We need to expand and also maintain a 1:1 developer / QA engineer ratio. Our major development focus in the next 8 months will be towards: * Large scale regression tests (24/7/365) * Web based monitoring and management * Hot upgrade/add/remove of storage nodes
2017 Nov 17
2
Help with reconnecting a faulty brick
..., 2017 13:07 CET, Ravishankar N <ravishankar at redhat.com> wrote: > On 11/16/2017 12:54 PM, Daniel Berteaud wrote: > > Any way in this situation to check which file will be healed from > > which brick before reconnecting? Using some getfattr tricks? > Yes, there are afr xattrs that determine the heal direction for each > file. The good copy will have non-zero trusted.afr* xattrs that blame > the bad one and heal will happen from good to bad. If both bricks have > attrs blaming the other, then the file is in split-brain. Thanks. So, say I have a file...
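Applying that rule by hand before reconnecting could look like the sketch below; the server names, brick path and client indices are placeholders, and the real xattr names follow the trusted.afr.<volname>-client-<index> pattern:

  F=/bricks/data/path/to/file          # same brick-relative path on both servers
  for host in serverA serverB; do
    echo "== $host =="
    ssh "$host" "getfattr -d -m trusted.afr -e hex $F"
  done

  # Each value is three 32-bit counters: data, metadata, entry.
  # Non-zero counters only on serverA, against serverB's client index:
  #   serverA is the good copy and heal runs A -> B.
  # Non-zero counters on both sides, each blaming the other: split-brain.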
2012 Mar 12
0
Data consistency with Gluster 3.2.5
...nded. The problem I am having is that when we switch live traffic to nodes in the cluster, they almost immediately get out of sync. The issue seems to be with cache files that are read/written a lot. Here is an excerpt pointing to issues with our OpenX banner cache: [2012-02-25 18:53:04.198326] E [afr-self-heal-common.c:2074:afr_self_heal_completion_cbk] 0-web-pub-replicate-0: background meta-data data missing-entry self-heal failed on /cust/site1/www/openx/var/cache/deliverycache_f8e7a8862cb80b4933c58acdf65aaef5.php [2012-02-25 18:53:04.199191] W [afr-common.c:1121:afr_conflicting_iattrs]...
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software raid? I've been playing with a variety of configs (AFR, AFR with Unify) on a two-server setup. Everything seems to work well, but performance (creating files, reading files, appending to files) is very slow. Using the same configs on two non-software-raid machines shows significant performance increases. Before I go and undo the software raid on thes...
2024 Jun 26
1
Confusion supreme
...machines the nodes are consistently named client-2: zephyrosaurus client-3: alvarezsaurus client-4: nanosaurus This is normal. It was the second time that a brick was removed, so client-0 and client-1 are gone. So the problem is the file attributes themselves. And there I see things like trusted.afr.gv0-client-0=0x000000000000000000000000 trusted.afr.gv0-client-1=0x000000000000000000000ab0 trusted.afr.gv0-client-3=0x000000000000000000000000 trusted.afr.gv0-client-4=0x000000000000000000000000 and trusted.afr.gv0-client-3=0x000000000000000000000000 trusted.afr.gv0-client-4=0x000000000000000000
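For anyone decoding those values: each trusted.afr.* xattr is 12 bytes, read as three big-endian 32-bit counters of pending data, metadata and entry operations. A small sketch to split one out (the value used is the non-zero one quoted above; 0xab0 = 2736 pending entry operations recorded against the now-removed client-1 brick):

  val=0x000000000000000000000ab0       # copied from the getfattr output above
  hex=${val#0x}
  printf 'data=%d metadata=%d entry=%d\n' \
    $((16#${hex:0:8})) $((16#${hex:8:8})) $((16#${hex:16:8}))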