Displaying 6 results from an estimated 6 matches for "3e72".
2017 Jul 24 (3 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...ter network found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick ...
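These WARNs mean the engine could not map the brick's host to any logical network carrying the gluster role, so it repeats the complaint for every brick of the volume. A minimal, hedged check from one of the nodes (volume name 'data' and the hostnames are taken from the snippets above):

  # Show which addresses the bricks and peers are registered under
  gluster volume info data | grep '^Brick'
  gluster peer status

If the peers resolve only on the management network, the usual remedy in oVirt is to assign the gluster network role to a dedicated cluster logical network in the administration portal; the exact menu path depends on the 4.1 minor release.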
2017 Jul 25 (0 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...luster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could n...
2017 Jul 24 (0 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...you.
>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command reports that both volumes are fully replicated:
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/brick
> Options Reco...
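For the "unsynced elements" complaint itself, the self-heal daemon can be queried directly. A short sketch, assuming the volume name 'data' shown above and a shell on any brick node:

  # List entries still pending heal, per brick
  gluster volume heal data info
  # Show self-heal crawl statistics (completed/failed counts)
  gluster volume heal data statistics

If 'heal info' comes back empty on all three bricks while the engine still reports unsynced entries, that points at engine-side monitoring rather than real pending heals.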
2017 Jul 22 (3 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...ou.
>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command reports that both volumes are fully replicated:
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluste...
2017 Jul 21 (0 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
..." volume as a fully replicated volume. Check these screenshots:
https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
But the "gluster volume info" command reports that both volumes are fully replicated:
Volume Name: data
Type: Replicate
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/data/brick
Brick2: gdnode02:/gluster/data/brick
Brick3: gdnode04:/gluster/data/brick
Options Reconfigured:
nfs.disable: on
performance.rea...
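oVirt hyperconverged setups normally apply the 'virt' option group to their storage volumes (quorum and caching settings tuned for VM images). A hedged way to review the full effective option list beyond the truncation above, and to reapply the profile if needed:

  # Dump every effective option for the volume
  gluster volume get data all
  # Reapply the virt profile (assumes the stock group file exists at
  # /var/lib/glusterd/groups/virt on the nodes)
  gluster volume set data group virt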
2017 Jul 21 (2 replies)
[ovirt-users] ovirt 4.1 hosted engine hyperconverged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is that there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
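The getfattr output referenced here is the standard way to read AFR's pending-changelog extended attributes straight off a brick. A sketch (the file path below is illustrative; run it for the same file on each brick):

  # Dump all extended attributes in hex; non-zero trusted.afr.* values
  # record operations pending heal towards the other bricks
  getfattr -d -m . -e hex /gluster/data/brick/path/to/file

Intermittent mount-to-brick disconnects, as suggested in the reply, would show these counters repeatedly going non-zero and then clearing once self-heal catches up.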