Sahina Bose
2017-Jul-19 14:32 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
[Adding gluster-users]

On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have an oVirt cluster hyperconverged with hosted engine on 3 full
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for the hosted engine)
>
> We have this problem: the "engine" gluster volume always has unsynced
> elements and we can't fix the problem. On the command line we have tried
> to use the "heal" command, but the elements always remain unsynced...
>
> Below the heal command "status":
>
> [root at node01 ~]# gluster volume heal engine info
> Brick node01:/gluster/engine/brick
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.64
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.60
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.2
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.68
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.61
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.1
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> /.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.20
> /__DIRECT_IO_TEST__
> Status: Connected
> Number of entries: 12
>
> Brick node02:/gluster/engine/brick
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/19d71267-52a4-42a3-bb1e-e3145361c0c2/7a215635-02f3-47db-80db-8b689c6a8f01
> <gfid:9a601373-bbaa-44d8-b396-f0b9b12c026f>
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/dom_md/ids
> <gfid:1e309376-c62e-424f-9857-f9a0c3a729bf>
> <gfid:e3565b50-1495-4e5b-ae88-3bceca47b7d9>
> <gfid:4e33ac33-dddb-4e29-b4a3-51770b81166a>
> /__DIRECT_IO_TEST__
> <gfid:67606789-1f34-4c15-86b8-c0d05b07f187>
> <gfid:9ef88647-cfe6-4a35-a38c-a5173c9e8fc0>
> /8f215dd2-8531-4a4f-b6ed-ea789dd8821b/images/88d41053-a257-4272-9e2e-2f3de0743b81/6573ed08-d3ed-4d12-9227-2c95941e1ad6
> <gfid:9ad720b2-507d-4830-8294-ec8adee6d384>
> <gfid:d9853e5d-a2bf-4cee-8b39-7781a98033cf>
> Status: Connected
> Number of entries: 12
>
> Brick node04:/gluster/engine/brick
> Status: Connected
> Number of entries: 0
>
>
> Running "gluster volume heal engine" doesn't solve the problem...
>
> Some extra info:
>
> We have recently changed the gluster volume from 2 (full replicated) + 1
> arbiter to a 3 full replicated cluster, but I don't know if this is the
> problem...
>
> The "data" volume is good and healthy and has no unsynced entries.
>
> oVirt refuses to put node02 and node01 in "maintenance mode" and
> complains about "unsynced elements".
>
> How can I fix this?
> Thank you
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
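A useful first check for entries that stay in the heal queue like this is whether any of them are reported as split-brain, and what the AFR pending-heal xattrs look like on each brick. A minimal sketch (the brick path and shard name are taken from the heal output above; run the getfattr on every node and compare the values):

    # Are any of the 12 entries actually in split-brain?
    gluster volume heal engine info split-brain

    # Inspect the pending-heal xattrs (trusted.afr.engine-client-*) of one
    # listed entry directly on the brick, on each node
    getfattr -d -m . -e hex /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48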
Ravishankar N
2017-Jul-19 14:55 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>
>> Hi all,
>>
>> We have an oVirt cluster hyperconverged with hosted engine on 3 full
>> replicated nodes. This cluster has 2 gluster volumes:
>>
>> - data: volume for the Data (Master) Domain (for VMs)
>> - engine: volume for the hosted_storage Domain (for the hosted engine)
>>
>> We have this problem: the "engine" gluster volume always has unsynced
>> elements and we can't fix the problem. On the command line we have tried
>> to use the "heal" command, but the elements always remain unsynced...
>>
>> [...heal info output snipped -- see the original message above...]
>>
>> Running "gluster volume heal engine" doesn't solve the problem...

1. What does the glustershd.log say on all 3 nodes when you run the
command? Does it complain about these files?
2. Are these 12 files also present in the 3rd data brick?
3. Can you provide the output of `gluster volume info` for this volume?

>> Some extra info:
>>
>> We have recently changed the gluster volume from 2 (full replicated) + 1
>> arbiter to a 3 full replicated cluster

Just curious, how did you do this? `remove-brick` of arbiter brick
followed by an `add-brick` to increase to replica-3?

Thanks,
Ravi

>> but I don't know if this is the problem...
>>
>> The "data" volume is good and healthy and has no unsynced entries.
>>
>> oVirt refuses to put node02 and node01 in "maintenance mode" and
>> complains about "unsynced elements".
>>
>> How can I fix this?
>> Thank you
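For questions 1 and 2, gathering the information on each node might look something like the sketch below (the gfid and shard names are taken from the heal listing above; the log path assumes the default glusterfs log location under /var/log/glusterfs/):

    # Look for one of the reported gfids/paths in the self-heal daemon log
    grep -i 9a601373-bbaa-44d8-b396-f0b9b12c026f /var/log/glusterfs/glustershd.log

    # Check that the listed entries exist on the local brick of every node
    ls -l /gluster/engine/brick/__DIRECT_IO_TEST__
    ls -l /gluster/engine/brick/.shard/8aa74564-6740-403e-ad51-f56d9ca5d7a7.48

    # Volume configuration (question 3)
    gluster volume info engine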
yayo (j)
2017-Jul-20 08:50 UTC
[Gluster-users] [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
Hi,

Thank you for the answer, and sorry for the delay:

2017-07-19 16:55 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:

> 1. What does the glustershd.log say on all 3 nodes when you run the
> command? Does it complain about these files?

No, glustershd.log is clean; no extra log entries after running the
command on all 3 nodes.

> 2. Are these 12 files also present in the 3rd data brick?

I've checked right now: all files exist on all 3 nodes.

> 3. Can you provide the output of `gluster volume info` for this volume?

Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
storage.owner-uid: 36
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on

>> We have recently changed the gluster volume from 2 (full replicated) + 1
>> arbiter to a 3 full replicated cluster
>
> Just curious, how did you do this? `remove-brick` of arbiter brick
> followed by an `add-brick` to increase to replica-3?

Yes:

# gluster volume remove-brick engine replica 2 node03:/gluster/data/brick force   (OK!)
# gluster volume heal engine info   (no entries!)
# gluster volume add-brick engine replica 3 node04:/gluster/engine/brick   (OK!)

After some minutes:

[root at node01 ~]# gluster volume heal engine info
Brick node01:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node02:/gluster/engine/brick
Status: Connected
Number of entries: 0

Brick node04:/gluster/engine/brick
Status: Connected
Number of entries: 0

> Thanks,
> Ravi

Another piece of extra info (I don't know if this can be the problem):
five days ago a blackout suddenly shut down the network switch (also the
gluster network) of node03 and node04... but I don't know whether this
problem started after that blackout.

Thank you!
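A common next step in this situation, given the switch blackout, is to confirm that the bricks and the self-heal daemon are online on every node and then trigger a full heal crawl. This is a sketch only, not a guaranteed fix; it assumes the volume name "engine" from the output above:

    # Verify all bricks and the Self-heal Daemon are online
    gluster volume status engine

    # Kick off a full self-heal crawl, then re-check the pending entries
    gluster volume heal engine full
    gluster volume heal engine info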