Displaying 6 results from an estimated 6 matches for "c7a5dfc9".
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...s no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate bri...
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ound in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Co...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...tell
> you.
>
>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command reports that both volumes are fully
> replicated:
>
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/brick
> Options...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...o
tell you.
>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command reports that both volumes are fully
> replicated:
>
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/g...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
..."data" volume as a fully replicated volume. Check these screenshots:
https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
But the "gluster volume info" command reports that both volumes are fully
replicated:
Volume Name: data
Type: Replicate
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/data/brick
Brick2: gdnode02:/gluster/data/brick
Brick3: gdnode04:/gluster/data/brick
Options Reconfigured:
nfs.disable: on
performanc...
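The volume info above (Type: Replicate, Number of Bricks: 1 x 3 = 3) can also be checked mechanically rather than by eye. A minimal sketch, assuming the same output format quoted in this thread; here the output is fed in from a literal string rather than a live `gluster volume info data` call:

```shell
#!/bin/sh
# Sample of the "gluster volume info" output quoted in the thread; on a live
# node this would come from `gluster volume info data` instead.
info='Volume Name: data
Type: Replicate
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3'

# Report volume type, status, and replica count parsed from the header lines.
echo "$info" | awk -F': ' '
  /^Type:/             { type = $2 }
  /^Status:/           { status = $2 }
  /^Number of Bricks:/ { split($2, a, " "); replicas = a[3] }
  END { print type, status, "replica=" replicas }
'
# prints: Replicate Started replica=3
```

"replica=3" here simply echoes the middle number of "1 x 3 = 3" (distribute count x replica count = brick count), which is what "full replication" across the three gdnode bricks means.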
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is that there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
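The engine.log warnings quoted throughout this thread all share one shape, so the affected brick and volume can be pulled out with a small sed filter. A minimal sketch, using one of the log lines from this thread as sample input; on a real host you would read /var/log/ovirt-engine/engine.log instead:

```shell
#!/bin/sh
# One of the engine.log WARN lines quoted in this thread, used as sample input;
# on a real oVirt engine host, grep /var/log/ovirt-engine/engine.log instead.
line="2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'"

# Pull the brick path and volume ID out of each "Could not associate brick" warning.
echo "$line" | sed -n "s/.*associate brick '\([^']*\)' of volume '\([^']*\)'.*/brick=\1 volume=\2/p"
# prints: brick=gdnode01:/gluster/data/brick volume=c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
```

The pending-heal side of the diagnosis can be watched directly with `gluster volume heal data info` on one of the nodes, which lists the entries each brick still considers in need of healing.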