search for: 4ea1

Displaying 7 results from an estimated 7 matches for "4ea1".

2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...etwork found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a' 2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gd...
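
[Editor's note: the warning in this snippet means oVirt found no logical network flagged for gluster traffic in the cluster, so it could not map the bricks to one. A minimal cross-check from a gluster host, assuming the volume name 'data' and the brick host 'gdnode01' shown above:

    gluster volume info data        # brick hostnames/paths as the engine sees them
    gluster volume status data      # confirm each brick process is online
    getent hosts gdnode01           # which address (and so which network) the brick host resolves to

If the brick hostnames resolve on a network other than the one marked as the gluster network in the cluster (or no network is marked at all), this warning is expected to repeat on every volume refresh.]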
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...r '00000002-0002-0002-0002-00000000017a' > 2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] > Could not associate brick 'gdnode01:/gluster/data/brick' of volume > 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster > network found in cluster '00000002-0002-0002-0002-00000000017a' > 2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] > Could not as...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...> > > https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ? > usp=sharing > > But the "gluster volume info" command report that all 2 volume are full > replicated: > > > *Volume Name: data* > *Type: Replicate* > *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d* > *Status: Started* > *Snapshot Count: 0* > *Number of Bricks: 1 x 3 = 3* > *Transport-type: tcp* > *Bricks:* > *Brick1: gdnode01:/gluster/data/brick* > *Brick2: gdnode02:/gluster/data/brick* > *Brick3: gdnode04:/gluster/data/brick* > *Options Reconfigu...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...gt; > https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing > > But the "gluster volume info" command report that all 2 volume are > full replicated: > > > /Volume Name: data/ > /Type: Replicate/ > /Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d/ > /Status: Started/ > /Snapshot Count: 0/ > /Number of Bricks: 1 x 3 = 3/ > /Transport-type: tcp/ > /Bricks:/ > /Brick1: gdnode01:/gluster/data/brick/ > /Brick2: gdnode02:/gluster/data/brick/ > /Brick3: gdnode04:/gluster/dat...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
...olume as full replicated volume. Check these screenshots: https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing But the "gluster volume info" command report that all 2 volume are full replicated: *Volume Name: data* *Type: Replicate* *Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d* *Status: Started* *Snapshot Count: 0* *Number of Bricks: 1 x 3 = 3* *Transport-type: tcp* *Bricks:* *Brick1: gdnode01:/gluster/data/brick* *Brick2: gdnode02:/gluster/data/brick* *Brick3: gdnode04:/gluster/data/brick* *Options Reconfigured:* *nfs.disable: on* *performance.readdir-...
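
[Editor's note: since the thread is about the "engine" storage domain reporting unsynced elements, the state GlusterFS itself reports for pending heals can be checked directly. A sketch, assuming the volume names 'data' (shown in the snippet) and 'engine' (referenced in the subject):

    gluster volume heal data info     # files/gfids still pending heal, per brick
    gluster volume heal engine info   # same check for the hosted-engine storage volume

Entries that persist across runs suggest the self-heal daemons have not caught up; an empty list on all bricks while the oVirt UI still flags unsynced elements would point more toward stale monitoring data on the engine side.]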
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain alway complain about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are the for the ones that you have given the getfattr output of. So > what is likely happening is there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
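
[Editor's note: the reply refers to getfattr output and pending heals. The AFR changelog attributes that mark a file as needing heal can be read on the brick-local copy; a sketch using the brick path from the volume info above, with a hypothetical file path:

    # run on a gluster host, against the copy stored on the brick itself
    getfattr -d -m . -e hex /gluster/data/brick/path/to/file
    # non-zero trusted.afr.*-client-N values indicate pending changes for that brick

Repeatedly rising and clearing values across checks would match the intermittent mount-to-brick connection problem suggested in the reply.]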
2024 May 07
1
[PATCH v2 01/12] drm/amdgpu, drm/radeon: Make I2C terminology more inclusive
...avid, others, Could you re-review v2 since the feedback provided in v0 [1] has now been addressed? I can send v3 with all other feedback and signoffs from the other maintainers incorporated when I have something for amdgpu and radeon. Thanks, Easwar [1] https://lore.kernel.org/all/53f3afba-4759-4ea1-b408-8a929b26280c at amd.com/