search for: gdnode01

Displaying 10 results from an estimated 10 matches for "gdnode01".

2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
..., GlusterVolumesListVDSParameters:{runAsync='true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (Default...
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ster (Connected) Other names: 192.168.10.52 dnode02.localdomain.local 10.10.20.90 10.10.10.20 gluster peer status on NODE02: Number of Peers: 2 Hostname: dnode01.localdomain.local Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12 State: Peer in Cluster (Connected) Other names: gdnode01 10.10.10.10 Hostname: gdnode04 Uuid: ce6e0f6b-12cf-4e40-8f01-d1609dfc5828 State: Peer in Cluster (Connected) Other names: 192.168.10.54 10.10.10.40 gluster peer status on NODE04: Number of Peers: 2 Hostname: dnode02.neridom.dom Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd ...
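(The peer listing quoted above is the output of `gluster peer status` run on each node; a minimal sketch of that check, reusing the `gdnode01` hostname from the thread, follows. No `<test>` is possible here since the commands need a live cluster.)

```shell
# On each node: list the other peers and every name/address they are
# known by. Each brick host should appear with a consistent hostname.
gluster peer status

# Verify that the storage-network name resolves identically on every
# node (gdnode01 is the hostname used in this thread).
getent hosts gdnode01
```

If a peer shows up only under a management-network name, heals and brick traffic may not use the intended storage network.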
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...SParameters:{runAsync='true', hostId='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolume...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ort that all 2 volume are full replicated: Volume Name: data Type: Replicate Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: gdnode01:/gluster/data/brick Brick2: gdnode02:/gluster/data/brick Brick3: gdnode04:/gluster/data/brick Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet storage.owner-uid: 36 performance.quick-read: off...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...aring But the "gluster volume info" command report that all 2 volume are full replicated: Volume Name: data Type: Replicate Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: gdnode01:/gluster/data/brick Brick2: gdnode02:/gluster/data/brick Brick3: gdnode04:/gluster/data/brick Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet storage.owner-uid: 36 performance.quick-read: off performance.read-ahead: off performa...
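(The volume listing above is standard `gluster volume info` output; a short sketch of how to reproduce and cross-check it, using the `data` volume name from the thread. Needs a live cluster, so there is nothing to assert offline.)

```shell
# Show topology and options for one volume. "Number of Bricks: 1 x 3 = 3"
# means a single replica-3 set: every file lives on all three bricks.
gluster volume info data

# "Replicate" in volume info does NOT mean the bricks are in sync;
# compare with the per-brick list of entries still pending heal.
gluster volume heal data info
```

A clean `heal … info` (zero entries on every brick) is the actual "fully replicated and in sync" check, which is what the "unsynced elements" warning in these threads is about.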
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...replicated: Volume Name: data Type: Replicate Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: gdnode01:/gluster/data/brick Brick2: gdnode02:/gluster/data/brick Brick3: gdnode04:/gluster/data/brick Options Reconfigured: nfs.disable: on performance.readdir-ahead: on transport.address-family: inet storage.owner-uid: 36 ...
2017 Jul 25
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...ors should go away. This has nothing to do with the problem you are seeing. Hi, You talking about errors like these? 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' How to assign "glusternw (???)" to the correct interface? Other errors on unsync gluster elemen...
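(For context on the "no gluster network found in cluster" WARN above: it goes away once one logical network in the oVirt cluster carries the "gluster" usage role. In the Administration Portal that is Cluster, then Logical Networks, then Manage Networks, then ticking "Gluster Network". A rough REST sketch follows; the hostname, credentials and the CLUSTER_ID/NETWORK_ID placeholders are assumptions, and the exact payload shape should be checked against the API reference for your oVirt build.)

```shell
# Hypothetical IDs; look them up first, e.g.:
#   GET https://engine.example.com/ovirt-engine/api/clusters
#   GET https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' -X PUT \
  -d '<network><usages><usage>gluster</usage></usages></network>' \
  'https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/networks/NETWORK_ID'
```

After the role is set, the engine can associate each brick's address with that network and the association WARNs should stop on the next volume list poll.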
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...g to do with the problem you are seeing. Hi, You talking about errors like these? 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' How to assign "glusternw (???)" to the correct interface? https://w...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote: 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: Could you check if the self-heal daemon on all nodes is connected to the 3 bricks? You will need to check the glustershd.log for that. If it is not connected, try restarting the shd using
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: Could you check if the self-heal daemon on all nodes is connected to the 3 bricks? You will need to check the glustershd.log for that. If it is not connected, try restarting the shd using `gluster volume start engine force`, then launch the heal command like you did earlier and see if heals
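(The restart-and-heal sequence suggested in that reply can be sketched as a short shell session, assuming the volume name `engine` from the thread; it requires a live GlusterFS cluster, so no offline assertions are possible.)

```shell
# Force-start the already-started volume: this respawns any missing
# daemons (including the self-heal daemon, shd) without touching
# bricks that are running normally.
gluster volume start engine force

# Trigger a heal of entries that need it, then watch the pending
# count per brick until it drops to zero.
gluster volume heal engine
gluster volume heal engine info
```

If `heal … info` keeps listing the same entries, check `/var/log/glusterfs/glustershd.log` on each node to confirm the shd is connected to all three bricks, as the reply above suggests.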