Displaying 9 results from an estimated 9 matches for "843e".
2017 Jul 24 · 3 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...k found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode0...
2017 Jul 25 · 0 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...'00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,218+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associate brick 'gdnode01:/gluster/data/brick' of volume 'c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d' with correct network as no gluster network found in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,221+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4] Could not associa...
2017 Jul 24 · 0 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...>
> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command report that all 2 volume are full
> replicated:
>
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/brick
> Options Reconfigured:...
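
For anyone scripting the same check instead of eyeballing the output above, the gluster CLI can emit machine-readable XML with --xml. A minimal sketch in Python; the element names (typeStr, replicaCount, brickCount) follow the usual glusterfs 3.x XML schema, so treat them as assumptions to verify against your version:

    # Sketch: confirm the "data" volume is a pure replica-3 volume.
    # Assumes the gluster CLI is installed and this runs on a storage node.
    import subprocess
    import xml.etree.ElementTree as ET

    out = subprocess.run(
        ["gluster", "volume", "info", "data", "--xml"],
        capture_output=True, text=True, check=True,
    ).stdout

    vol = ET.fromstring(out).find(".//volume")
    print("type:", vol.findtext("typeStr"))          # expect "Replicate"
    print("replicas:", vol.findtext("replicaCount")) # expect "3"
    print("bricks:", vol.findtext("brickCount"))     # expect "3"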
2017 Jul 22 · 3 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...> https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
>
> But the "gluster volume info" command report that all 2 volume are
> full replicated:
>
>
> Volume Name: data
> Type: Replicate
> Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/data/brick
> Brick2: gdnode02:/gluster/data/brick
> Brick3: gdnode04:/gluster/data/bri...
2017 Jul 21 · 0 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
...as full replicated volume. Check these screenshots:
https://drive.google.com/drive/folders/0ByUV7xQtP1gCTE8tUTFfVmR5aDQ?usp=sharing
But the "gluster volume info" command report that all 2 volume are full
replicated:
Volume Name: data
Type: Replicate
Volume ID: c7a5dfc9-3e72-4ea1-843e-c8275d4a7c2d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/data/brick
Brick2: gdnode02:/gluster/data/brick
Brick3: gdnode04:/gluster/data/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead...
2017 Jul 21 · 2 replies · [ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is that there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
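
If the mount-to-brick connection really is flapping as suggested, the number of pending heal entries should rise and fall over time rather than stay constant. A small watcher one could run on a storage node (a sketch only: it assumes the gluster CLI and the plain-text "Number of entries:" lines that glusterfs 3.8 prints; the volume name comes from the thread):

    # Sketch: poll "gluster volume heal data info" and log the pending
    # entry count per brick every 30 seconds.
    import subprocess
    import time

    VOLUME = "data"  # "engine" is the other volume discussed in the thread

    while True:
        out = subprocess.run(
            ["gluster", "volume", "heal", VOLUME, "info"],
            capture_output=True, text=True, check=True,
        ).stdout
        pending = [int(line.split(":")[1])
                   for line in out.splitlines()
                   if line.startswith("Number of entries:")]
        print(time.strftime("%H:%M:%S"), "pending per brick:", pending)
        time.sleep(30)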
2019 Aug 30 · 2 replies · backup AD content
...arding older DRS linked attribute update to member on CN=Domain Admins,CN=Users,DC=arbeitsgruppe,DC=mydomain,DC=at from c93692f6-4a12-41c3-b622-9593650dd565
Discarding older DRS linked attribute update to member on CN=Domain Admins,CN=Users,DC=arbeitsgruppe,DC=mydomain,DC=at from dc732e2f-383e-4241-843e-d071d15caf41
Discarding older DRS linked attribute update to member on CN=Domain Admins,CN=Users,DC=arbeitsgruppe,DC=mydomain,DC=at from c93692f6-4a12-41c3-b622-9593650dd565
Discarding older DRS linked attribute update to member on CN=Domain Admins,CN=Users,DC=arbeitsgruppe,DC=mydomain,DC=at from c...
2019 Aug 30 · 5 replies · backup AD content
I happily and trustfully use Louis' backup script from
https://github.com/thctlo/samba4
to dump the AD content via a cronjob.
Is it necessary/recommended to do that on *each* Samba DC? Is there
something server-specific in the dump(s), or is it enough to do it once
per domain?
thanks ...
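
Independent of Louis' script (whose interface isn't shown in this thread), Samba 4.9 and later also ship an offline backup mode in samba-tool, which captures the databases plus sysvol in one tarball. A minimal cron-friendly wrapper, sketched in Python; the target directory is a placeholder, not something from the thread:

    # Sketch: offline AD backup via samba-tool (requires Samba >= 4.9).
    import subprocess
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/samba")  # placeholder location
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)

    # Writes a dated backup tarball into the target directory.
    subprocess.run(
        ["samba-tool", "domain", "backup", "offline",
         f"--targetdir={BACKUP_DIR}"],
        check=True,
    )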
2010 May 05 · 0 replies · R-help Digest, Vol 87, Issue 5
..., 4 May 2010 10:31:46 -0400
From: David Winsemius <dwinsemius at comcast.net>
To: Jorge Ivan Velez <jorgeivanvelez at gmail.com>
Cc: r-help at r-project.org, someone <vonhoffen at t-online.de>
Subject: Re: [R] Show number at each bar in barchart?
Message-ID: <982FA4DD-6CC1-469B-843E-9A28D9CD3E4C at comcast.net>
Content-Type: text/plain; charset=US-ASCII; format=flowed; delsp=yes
On May 4, 2010, at 9:40 AM, Jorge Ivan Velez wrote:
> Hi someone,
>
> Try this:
>
> x <- c(20, 80, 20, 5, 2)
> b <- barplot(x, ylim = c(0, 85), las = 1)
> text(b, x+2,...
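
The snippet is cut off by the search index; the usual completion of that answer (an assumption here, since the rest of the message isn't shown) just passes each bar's value as its label, in R:

    x <- c(20, 80, 20, 5, 2)
    b <- barplot(x, ylim = c(0, 85), las = 1)  # b holds the bar midpoints
    text(b, x + 2, labels = x)  # draw each value just above its bar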