yayo (j)
2017-Jul-20 10:12 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:

> Could you check if the self-heal daemon on all nodes is connected to the
> 3 bricks? You will need to check the glustershd.log for that.
> If it is not connected, try restarting the shd using `gluster volume
> start engine force`, then launch the heal command like you did earlier
> and see if heals happen.

I've executed the command on all 3 nodes (I know that one is enough). After that, the "heal" command reports between 6 and 10 elements (sometimes 6, sometimes 8, sometimes 10).

glustershd.log doesn't say anything unusual:

[2017-07-20 09:58:46.573079] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327. sources=[0] 1  sinks=2
[2017-07-20 09:59:22.995003] I [MSGID: 108026] [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do] 0-engine-replicate-0: performing metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81
[2017-07-20 09:59:22.999372] I [MSGID: 108026] [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0: Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81. sources=[0] 1  sinks=2
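If useful: whether the shd is actually connected to each brick can be checked in the same log. A sketch, assuming the default log location and the usual client-handshake messages:

grep -E 'engine-client-[0-9]' /var/log/glusterfs/glustershd.log | grep -iE 'connected|disconnected' | tail -n 20

A healthy shd should show a "Connected to engine-client-N" line for each of the 3 bricks and no recent "disconnected from" lines after it.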
> If it doesn't, please provide the getfattr outputs of the 12 files from
> all 3 nodes using `getfattr -d -m . -e hex
> /gluster/engine/brick/path-to-file` ?

NODE01:

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000120000000000000000
trusted.bit-rot.version=0x090000000000000059647d5b000447e9
trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x0000000e0000000000000000
trusted.bit-rot.version=0x090000000000000059647d5b000447e9
trusted.gfid=0x676067891f344c1586b8c0d05b07f187

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000550000000000000000
trusted.bit-rot.version=0x090000000000000059647d5b000447e9
trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000c8000000000000000000000000000000000d4f2290000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000070000000000000000
trusted.bit-rot.version=0x090000000000000059647d5b000447e9
trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000000000000000000000
trusted.bit-rot.version=0x0f0000000000000059647d5b000447e9
trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/__DIRECT_IO_TEST__
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000000000000000000000
trusted.gfid=0xf05b97422771484a85fc5b6974bcef81
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000000000000000000000000000000000000000000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000010000000000000000
trusted.bit-rot.version=0x0f0000000000000059647d5b000447e9
trusted.gfid=0xe6dfd556340b4b76b47b7b6f5bd74327
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x0000000a0000000000000000
trusted.bit-rot.version=0x090000000000000059647d5b000447e9
trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0

NODE02:

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x0000001a0000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
trusted.afr.dirty=0x000000010000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x0000000c0000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x676067891f344c1586b8c0d05b07f187

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-1=0x000000000000000000000000
trusted.afr.engine-client-2=0x0000008e0000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000c8000000000000000000000000000000000d4f2290000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000090000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000010000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/__DIRECT_IO_TEST__
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000000000000000000000
trusted.gfid=0xf05b97422771484a85fc5b6974bcef81
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000000000000000000000000000000000000000000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000020000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0xe6dfd556340b4b76b47b7b6f5bd74327
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.engine-client-0=0x000000000000000000000000
trusted.afr.engine-client-2=0x000000120000000000000000
trusted.bit-rot.version=0x08000000000000005965ede0000c352d
trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0

NODE04:

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0xe3565b5014954e5bae883bceca47b7d9

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0x676067891f344c1586b8c0d05b07f187

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0x8aa745646740403ead51f56d9ca5d7a7
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000c8000000000000000000000000000000000d4f2290000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0x4e33ac33dddb4e29b4a351770b81166a

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0x2581cb9ac2b74bd9ac17a09bd2f001b3
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/__DIRECT_IO_TEST__
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x0200000000000000596484e20006237b
trusted.gfid=0xf05b97422771484a85fc5b6974bcef81
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000000000000000000000000000000000000000000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x000000000000000000000000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0xe6dfd556340b4b76b47b7b6f5bd74327
trusted.glusterfs.shard.block-size=0x0000000020000000
trusted.glusterfs.shard.file-size=0x0000000000100000000000000000000000000000000008000000000000000000

getfattr: Removing leading '/' from absolute path names
# file: gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.bit-rot.version=0x050000000000000059662c390006b836
trusted.gfid=0x9ef88647cfe64a35a38ca5173c9e8fc0

Hmm... is SELinux the problem? But on node04 it was disabled (after the gluster join, if I remember correctly)... Do you think I need to relabel? How?

[root at node01 ~]# sestatus
SELinux status:                 disabled

[root at node02 ~]# sestatus
SELinux status:                 disabled

[root at node04 ~]# sestatus
SELinux status:                 disabled
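Also, if I'm decoding the trusted.afr.* values above correctly (assuming the usual AFR changelog layout: three 4-byte big-endian counters for pending data, metadata and entry operations), they all point the same way. For example, on NODE01 the image file 7a215635-02f3-47db-80db-8b689c6a8f01 has:

trusted.afr.engine-client-2 = 0x 00000055 00000000 00000000
                                 (data)   (metadata) (entry)
0x55 = 85 pending data operations against engine-client-2

engine-client-2 is the third brick (node04), while trusted.afr.engine-client-1 on NODE01 (and engine-client-0 on NODE02) is all zero. So node01 and node02 agree with each other and both hold pending heals destined for node04, which matches the "sources=[0] 1 sinks=2" lines in glustershd.log above.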
Thank you

>> Thanks,
>> Ravi

>> 2. Are these 12 files also present in the 3rd data brick?
>
> I've checked right now: all files exist in all 3 nodes.

>> 3. Can you provide the output of `gluster volume info` for this volume?
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> storage.owner-uid: 36
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-gid: 36
> features.shard-block-size: 512MB
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: on
> auth.allow: *
> server.allow-insecure: on

>>> Some extra info:
>>>
>>> We have recently changed the gluster from 2 (fully replicated) + 1
>>> arbiter to a 3 fully replicated cluster.
>>
>> Just curious, how did you do this? `remove-brick` of arbiter brick
>> followed by an `add-brick` to increase to replica-3?
>
> Yes
>
> # gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force   (OK!)
> # gluster volume heal engine info   (no entries!)
> # gluster volume add-brick engine replica 3 node04:/gluster/engine/brick   (OK!)
>
> After some minutes:
>
> [root at node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick node02:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0

>> Thanks,
>> Ravi

> Another piece of extra info (I don't know if this can be the problem):
> five days ago a blackout suddenly shut down the network switch (including
> the gluster network) of nodes 03 and 04 ... but I don't know whether this
> problem was already present before that blackout.
>
> Thank you!

--
Linux User: 369739 http://counter.li.org
Ravishankar N
2017-Jul-20 12:48 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote:
> 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com
> <mailto:ravishankar at redhat.com>>:
>
>> Could you check if the self-heal daemon on all nodes is connected to
>> the 3 bricks? You will need to check the glustershd.log for that.
>> If it is not connected, try restarting the shd using `gluster volume
>> start engine force`, then launch the heal command like you did earlier
>> and see if heals happen.
>
> I've executed the command on all 3 nodes (I know that one is enough).
> After that, the "heal" command reports between 6 and 10 elements
> (sometimes 6, sometimes 8, sometimes 10).
>
> glustershd.log doesn't say anything:

But it does say something. All these gfids of completed heals in the log below are the ones that you have given the getfattr output of. So what is likely happening is that there is an intermittent connection problem between your mount and the brick process, leading to pending heals again after the heal gets completed, which is why the numbers are varying each time. You would need to check why that is the case.
Hope this helps,
Ravi

> [2017-07-20 09:58:46.573079] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed data selfheal on e6dfd556-340b-4b76-b47b-7b6f5bd74327.
> sources=[0] 1  sinks=2
> [2017-07-20 09:59:22.995003] I [MSGID: 108026]
> [afr-self-heal-metadata.c:51:__afr_selfheal_metadata_do]
> 0-engine-replicate-0: performing metadata selfheal on
> f05b9742-2771-484a-85fc-5b6974bcef81
> [2017-07-20 09:59:22.999372] I [MSGID: 108026]
> [afr-self-heal-common.c:1254:afr_log_selfheal] 0-engine-replicate-0:
> Completed metadata selfheal on f05b9742-2771-484a-85fc-5b6974bcef81.
> sources=[0] 1  sinks=2
> [... rest of quoted message (getfattr outputs, volume info and brick
> conversion details) trimmed; see the previous message ...]
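A concrete way to start that check (a sketch, assuming the default /var/log/glusterfs layout; the fuse mount log file is named after the mount point, so its exact name will differ per setup):

# client side: self-heal daemon and fuse mount logs on each node
grep -i 'disconnected from' /var/log/glusterfs/glustershd.log /var/log/glusterfs/*.log | tail -n 20
# server side: brick logs
grep -i 'disconnect' /var/log/glusterfs/bricks/*.log | tail -n 20

Disconnect timestamps that line up with the heal counts rising again would confirm the intermittent drops.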
yayo (j)
2017-Jul-21 09:25 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:

> But it does say something. All these gfids of completed heals in the log
> below are the ones that you have given the getfattr output of. So what
> is likely happening is that there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
> after the heal gets completed, which is why the numbers are varying each
> time. You would need to check why that is the case.
> Hope this helps,
> Ravi
>
> [... quoted glustershd.log lines trimmed ...]

Hi,

But we have 2 gluster volumes on the same network, and the other one (the "data" gluster) doesn't have any problems. Why do you think there is a network problem? How can I check this on a gluster infrastructure?

Thank you
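P.S. A generic starting point, if it helps (a sketch using only standard gluster CLI commands; "engine" and "data" are the volume names from this thread):

gluster volume status engine    # are all bricks online, and on which ports?
gluster volume status data      # same check on the volume that has no problems
gluster volume heal engine info # run repeatedly, to see when entries reappear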
yayo (j)
2017-Jul-21 17:13 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:

> But it does say something. All these gfids of completed heals in the log
> below are the ones that you have given the getfattr output of. So what
> is likely happening is that there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
> after the heal gets completed, which is why the numbers are varying each
> time. You would need to check why that is the case.
> Hope this helps,
> Ravi

Hi,

Following your suggestion, I've checked the "peer" status and I found that there are too many names for the hosts. I don't know if this can be the problem or part of it:

gluster peer status on NODE01:

Number of Peers: 2

Hostname: dnode02.localdomain.local
Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
State: Peer in Cluster (Connected)
Other names:
192.168.10.52
dnode02.localdomain.local
10.10.20.90
10.10.10.20

gluster peer status on NODE02:

Number of Peers: 2

Hostname: dnode01.localdomain.local
Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12
State: Peer in Cluster (Connected)
Other names:
gdnode01
10.10.10.10

Hostname: gdnode04
Uuid: ce6e0f6b-12cf-4e40-8f01-d1609dfc5828
State: Peer in Cluster (Connected)
Other names:
192.168.10.54
10.10.10.40

gluster peer status on NODE04:

Number of Peers: 2

Hostname: dnode02.neridom.dom
Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd
State: Peer in Cluster (Connected)
Other names:
10.10.20.90
gdnode02
192.168.10.52
10.10.10.20

Hostname: dnode01.localdomain.local
Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12
State: Peer in Cluster (Connected)
Other names:
gdnode01
10.10.10.10

All these IPs are pingable and the hosts are resolvable across all 3 nodes, but only the 10.10.10.0 network is the dedicated gluster network (resolved using the gdnode* host names). Do you think that removing the other entries can fix the problem? If so, sorry, but how can I remove the other entries?

And what about the SELinux question?

Thank you
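P.S. If it helps, the exact names glusterd has recorded for each peer can be seen in its state files (a sketch; assuming the usual /var/lib/glusterd layout with one flat file per peer UUID, where the alternate names appear as hostname1, hostname2, ... keys):

cat /var/lib/glusterd/peers/*

As far as I understand, the extra names accumulate when the same peer is probed again via a different address or host name.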