search for: gdnode02

Displaying 8 results from an estimated 8 matches for "gdnode02".

2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...7b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode02:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,215+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (Default...
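
This warning usually means the engine cannot map the brick's hostname (here gdnode02) to a cluster network carrying the "gluster" role; the role itself is assigned in the oVirt Administration Portal. A minimal sketch of how one might confirm which hostnames the bricks are registered under, with the volume name "engine" taken from the log above:

    # List the bricks exactly as Gluster registered them; the engine tries
    # to match these hostnames against a cluster network with the "gluster"
    # role, and logs the warning above when no match is found.
    gluster volume info engine | grep -E '^Brick[0-9]+:'

    # Check how the brick hostname resolves on this node (assumption: it
    # should resolve to an address on the intended storage network).
    getent hosts gdnode02
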
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...' with correct network as no gluster > network found in cluster '00000002-0002-0002-0002-00000000017a' > 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] > Could not associate brick 'gdnode02:/gluster/engine/brick' of volume > 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster > network found in cluster '00000002-0002-0002-0002-00000000017a' > 2017-07-24 15:54:02,215+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolume...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...ed: > > > Volume Name: data > Type: Replicate > Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: gdnode01:/gluster/data/brick > Brick2: gdnode02:/gluster/data/brick > Brick3: gdnode04:/gluster/data/brick > Options Reconfigured: > nfs.disable: on > performance.readdir-ahead: on > transport.address-family: inet > storage.owner-uid: 36 > performance.quick-read: off > performance.read-ahead: off > pe...
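
Since the thread is about a domain that keeps reporting "unsynced" elements, a quick way to see what is actually pending heal on each replicated volume (volume names "data" and "engine" are taken from the snippets; output format varies by GlusterFS version):

    # Show entries still pending self-heal on each volume; a healthy
    # replica reports "Number of entries: 0" for every brick.
    gluster volume heal data info
    gluster volume heal engine info
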
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...01-d1609dfc5828 State: Peer in Cluster (Connected) Other names: 192.168.10.54 10.10.10.40 gluster peer status on NODE04: Number of Peers: 2 Hostname: dnode02.neridom.dom Uuid: 7c0ebfa3-5676-4d3f-9bfa-7fff6afea0dd State: Peer in Cluster (Connected) Other names: 10.10.20.90 gdnode02 192.168.10.52 10.10.10.20 Hostname: dnode01.localdomain.local Uuid: a568bd60-b3e4-4432-a9bc-996c52eaaa12 State: Peer in Cluster (Connected) Other names: gdnode01 10.10.10.10 All these IPs are pingable and hosts resolvable across all 3 nodes, but only the 10.10.10.0 network is th...
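
With several "Other names" per peer spread across three subnets, inconsistent name resolution between nodes is a common way for bricks to end up tied to the wrong network. A minimal consistency check, run on every node, using the hostnames from the peer status above:

    # Each name should resolve to the same, intended storage-network
    # address on all three nodes; differing answers point at /etc/hosts
    # or DNS drift between the peers.
    for h in gdnode01 gdnode02 gdnode04; do
        echo "== $h =="
        getent hosts "$h"
    done
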
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...o" command report that all 2 volume are full replicated: *Volume Name: data* *Type: Replicate* *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d* *Status: Started* *Snapshot Count: 0* *Number of Bricks: 1 x 3 = 3* *Transport-type: tcp* *Bricks:* *Brick1: gdnode01:/gluster/data/brick* *Brick2: gdnode02:/gluster/data/brick* *Brick3: gdnode04:/gluster/data/brick* *Options Reconfigured:* *nfs.disable: on* *performance.readdir-ahead: on* *transport.address-family: inet* *storage.owner-uid: 36* *performance.quick-read: off* *performance.read-ahead: off* *performance.io-cache: off* *performance.stat-pr...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...a > Type: Replicate > Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x 3 = 3 > Transport-type: tcp > Bricks: > Brick1: gdnode01:/gluster/data/brick > Brick2: gdnode02:/gluster/data/brick > Brick3: gdnode04:/gluster/data/brick > Options Reconfigured: > nfs.disable: on > performance.readdir-ahead: on > transport.address-family: inet > storage.owner-uid: 36 > performance.quick-read: off > perfo...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote: > > 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > Could you check if the self-heal daemon on all nodes is connected > to the 3 bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using
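
A sketch of how one might check the self-heal daemon's brick connections in glustershd.log, as suggested above; the log path is the usual default, and the exact message wording varies across GlusterFS releases, so the grep pattern is an assumption:

    # Look for per-brick client connection state in the shd log; one
    # "Connected to <volume>-client-N" line per brick is the healthy case.
    grep -iE 'connected to|disconnected from' /var/log/glusterfs/glustershd.log | tail -n 20
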
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start engine force`, then launch the heal command like you did earlier and see if > heals
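
Put together, the recovery suggested in this post is: force-start the volume to respawn the self-heal daemon, then trigger a heal and watch whether the pending-entry count drops. The first command is quoted from the thread; the heal commands are the standard way to "launch the heal command" as described:

    # "start ... force" restarts any missing volume daemons (including
    # the self-heal daemon) without touching bricks that already run.
    gluster volume start engine force

    # Trigger a heal, then check whether the pending entries decrease.
    gluster volume heal engine
    gluster volume heal engine info
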