Displaying 20 results from an estimated 21 matches for "8ba7".
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...'4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
2017-07-24 15:54:02,209+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-4f2a23d17515'
with correct network as no gluster network found in cluster
'00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,212+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode0...
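The warning quoted above means the engine found no cluster network flagged with the gluster role, so it could not associate the brick host gdnode01 with one. A minimal check from the host side, using only stock tools (the hostnames are the ones quoted in the log; everything else is generic):

    # Which hosts and paths serve the bricks of the engine volume
    gluster volume info engine | grep -E '^Brick'
    # How each brick hostname resolves; the returned address should sit on
    # the network that carries the gluster role in the oVirt cluster
    getent hosts gdnode01 gdnode02 gdnode04
    # List local addresses to confirm which interface owns that address
    ip -o -4 addr show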
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...4b3-af332247570c'}), log id: 7fce25d3
> 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associa...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...shard-block-size: 512MB
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: on
> auth.allow: *
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/engine/brick
> Brick2: gdnode02:/gluster/engine/brick
> Brick3: gdnode04:/gluster/engine/brick
> Options Reconfig...
2023 Apr 03
1
WARNING: no target object found for GUID component link lastKnownParent in deleted object
...,CN=DC4\0ADEL:650386f2-bc40-45ba-b652-222baa646a96,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br
- <GUID=df8357a0-8331-4c51-9009-82fb0aa23b81>;CN=NTDS
Settings\0ADEL:df8357a0-8331-4c51-9009-82fb0aa23b81,CN=DC3\0ADEL:91e4f5fd-4976-49f7-8ba7-f7660e0aa1b4,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br
Not removing dangling one-way link on deleted object (tombstone garbage
collection in progress?)
WARNING: no target object found for GUID component link fromServer in
deleted object...
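The messages above look like samba-tool dbcheck output: the NTDS Settings objects of removed DCs are tombstoned but still carry dangling links. A minimal sketch of the usual follow-up, assuming samba-tool produced this output and that the databases under /var/lib/samba/private are backed up before any --fix run:

    # Read-only pass over all naming contexts to list the dangling links
    samba-tool dbcheck --cross-ncs
    # If the links persist after tombstone garbage collection has finished,
    # let dbcheck repair them (answering yes to each proposed fix)
    samba-tool dbcheck --cross-ncs --fix --yes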
2023 Apr 04
1
WARNING: no target object found for GUID component link lastKnownParent in deleted object
...f2-bc40-45ba-b652-222baa646a96,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br
> > - <GUID=df8357a0-8331-4c51-9009-82fb0aa23b81>;CN=NTDS
> > Settings\0ADEL:df8357a0-8331-4c51-9009-82fb0aa23b81,CN=DC3\0ADEL:91e4f5fd-4976-49f7-8ba7-f7660e0aa1b4,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=campus,DC=sertao,DC=ifrs,DC=edu,DC=br
> > Not removing dangling one-way link on deleted object (tombstone garbage
> > collection in progress?)
> > WARNING: no target object found for GUID component lin...
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: on
> auth.allow: *
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/engine/brick
> Brick2: gdnode02:/gluster/engine/brick
> Brick3: gdnode04:/gluster/engi...
2017 Jul 25
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...are seeing.
>
Hi,
Are you talking about errors like these?
2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
network found in cluster '00000002-0002-0002-0002-00000000017a'
How to assign "glusternw (???)" to the correct interface?
Other errors about unsynced gluster elements still remain... This is a
production env, so is there any chan...
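For the "unsynced elements" part of the question, the stock gluster CLI can list exactly which entries the self-heal daemon still considers pending. A minimal sketch using the volume name from the thread; the first two commands are read-only, the last only triggers a heal crawl:

    # Entries pending heal, per brick
    gluster volume heal engine info
    # Entries in genuine split-brain (should be empty on a healthy replica 3)
    gluster volume heal engine info split-brain
    # Kick off a full self-heal crawl if the same entries keep reappearing
    gluster volume heal engine full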
2013 Oct 10
12
What's the best way to approach reading and parse large XLSX files?
...il to rubyonrails-talk+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To post to this group, send email to rubyonrails-talk-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
To view this discussion on the web visit https://groups.google.com/d/msgid/rubyonrails-talk/bc470d4d-19c4-4969-8ba7-4ead7a35d40c%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
2017 Dec 11
6
Update samba and Debian
...89
Looking for DNS entry SRV _ldap._tcp.dc._msdcs.eccmg.cupet.cu ccmg7.eccmg.cupet.cu 389 as _ldap._tcp.dc._msdcs.eccmg.cupet.cu.
Checking 0 100 389 ccmg7.eccmg.cupet.cu. against SRV _ldap._tcp.dc._msdcs.eccmg.cupet.cu ccmg7.eccmg.cupet.cu 389
Looking for DNS entry SRV _ldap._tcp.4f2a2c15-b049-4139-8ba7-a827147dfd14.domains._msdcs.eccmg.cupet.cu ccmg7.eccmg.cupet.cu 389 as _ldap._tcp.4f2a2c15-b049-4139-8ba7-a827147dfd14.domains._msdcs.eccmg.cupet.cu.
Checking 0 100 389 ccmg7.eccmg.cupet.cu. against SRV _ldap._tcp.4f2a2c15-b049-4139-8ba7-a827147dfd14.domains._msdcs.eccmg.cupet.cu ccmg7.eccmg.cupet....
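The "Looking for DNS entry SRV ... / Checking ..." lines look like verbose samba_dnsupdate output. A minimal sketch for re-running the same checks by hand, assuming the realm and GUID shown above:

    # Re-run the DC's dynamic DNS self-check with verbose output
    samba_dnsupdate --verbose
    # Query the same SRV records directly to confirm what the resolver returns
    host -t SRV _ldap._tcp.dc._msdcs.eccmg.cupet.cu
    host -t SRV _ldap._tcp.4f2a2c15-b049-4139-8ba7-a827147dfd14.domains._msdcs.eccmg.cupet.cu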
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>
> Hi all,
>
> We have an ovirt cluster hyperconverged with hosted engine on 3
> full replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...3 nodes
> 2. Are these 12 files also present in the 3rd data brick?
>
I've checked right now: all files exist in all 3 nodes
> 3. Can you provide the output of `gluster volume info` for this volume?
>
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead...
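For reference, a 1 x 3 replicate volume with the layout shown above would be built roughly as follows. Brick paths and option values are copied from the quoted output; the commands themselves are a sketch, not the poster's actual history:

    gluster volume create engine replica 3 transport tcp \
        node01:/gluster/engine/brick \
        node02:/gluster/engine/brick \
        node04:/gluster/engine/brick
    # Apply a few of the options listed under "Options Reconfigured"
    gluster volume set engine network.ping-timeout 30
    gluster volume set engine performance.strict-o-direct on
    gluster volume set engine cluster.granular-entry-heal on
    gluster volume start engine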
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ou talking about errors like these?
>
> 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '00000002-0002-0002-0002-00000000017a'
>
>
> How to assign "glusternw (???)" to the correct interface?
>
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-sto...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/engine/brick
Brick2: gdnode02:/gluster/engine/brick
Brick3: gdnode04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir...
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
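Ravishankar's hypothesis, an intermittent connection between the client mount and a brick process causing heals to queue up again, can be checked without touching data. A minimal sketch, assuming default gluster log locations (the exact fuse mount log file name depends on the mount point):

    # Which clients are currently connected to each brick of the volume
    gluster volume status engine clients
    # Look for disconnect/reconnect cycles in the fuse client log
    grep -iE 'disconnect|connected to' /var/log/glusterfs/*engine*.log | tail -n 50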
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...data brick?
>
>
> I've checked right now: all files exist in all 3 nodes
>
> 3. Can you provide the output of `gluster volume info` for
> this volume?
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/bri...
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have an ovirt cluster hyperconverged with hosted engine on 3 full
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for hosted engine)
>
>
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...data brick?
>>
>
> I've checked right now: all files exist in all 3 nodes
>
>
>> 3. Can you provide the output of `gluster volume info` for this
>> volume?
>>
>
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigured:...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...all files exist in all 3 nodes
>>
>> 3. Can you provide the output of `gluster volume info` for
>> this volume?
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: node01:/gluster/engine/brick
>> Brick2: node02:/gluster/en...
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try the recently released health
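Gathering the three items Karthik asks for above amounts to something like the following sketch; the volume name, brick path and file path are the same placeholders used in the request, and the log locations assume a default install:

    # 1. Volume layout and options
    gluster volume info <volname>
    # 2. AFR extended attributes of the affected file, run on every brick
    getfattr -d -e hex -m . <brickpath/filepath>
    # 3. Self-heal daemon and heal-tool logs
    tail -n 200 /var/log/glusterfs/glustershd.log
    tail -n 200 /var/log/glusterfs/glfsheal-<volname>.log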
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool, and see if it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
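The "health report tool" mentioned here is presumably the gluster-health-report project; the package name and invocation below are an assumption based on that project rather than anything stated in the thread, and it has to be run on each of the three nodes because it only inspects the local one:

    # Assumed package and command name (gluster-health-report); verify before use
    pip install gluster-health-report
    gluster-health-report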