Displaying 20 results from an estimated 23 matches for "437b".
2017 Jul 24
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...d='4c89baa5-e8f7-4132-a4b3-af332247570c'}), log id: 7fce25d3
2017-07-24 15:54:02,209+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gdnode01:/gluster/engine/brick' of volume 'd19c19e3-910d-437b-8ba7-
4f2a23d17515' with correct network as no gluster network found in cluster
'00000002-0002-0002-0002-00000000017a'
2017-07-24 15:54:02,212+02 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn]
(DefaultQuartzScheduler2) [b7590c4] Could not associate brick
'gd...
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...132-a4b3-af332247570c'}), log id: 7fce25d3
> 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '00000002-0002-0002-0002-00000000017a'
> 2017-07-24 15:54:02,212+02 WARN [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not as...
2017 Jul 24
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...ures.shard-block-size: 512MB
> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: on
> auth.allow: *
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/engine/brick
> Brick2: gdnode02:/gluster/engine/brick
> Brick3: gdnode04:/gluster/engine/brick
> Options Rec...
2020 May 12
2
Re: Unit libvirtd.service could not be found. on VM
...hat didn't solve the issue. I then checked the VMs: the
libvirt-daemon rpm was indeed missing on my VMs. After I installed it &
reloaded its unit files libvirtd.service was found, but as I started it,
the error 'operation failed: pool 'default' already exists with uuid
a42beb54-839e-437b-a48e-d06f6100205c' appeared again on my laptop.
I'm not sure if I was supposed to install the libvirt-daemon rpm on the VMs. If
it was needed, how do I resolve the error now? And any idea why it was
missing? I never had to install it before.
If not - if you have any other thoughts/suggestions I...
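A sketch of how a stale definition like this can be cleared with standard virsh commands (the pool name 'default' is taken from the error above; verify before removing anything):

  virsh pool-list --all        # confirm the duplicate 'default' pool is defined
  virsh pool-destroy default   # stop it if it is currently active
  virsh pool-undefine default  # drop the stale definition so it can be re-created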
2017 Jul 22
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...> network.ping-timeout: 30
> performance.strict-o-direct: on
> cluster.granular-entry-heal: on
> auth.allow: *
> server.allow-insecure: on
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gdnode01:/gluster/engine/brick
> Brick2: gdnode02:/gluster/engine/brick
> Brick3: gdnode04:/gluster...
2017 Jul 25
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...you are seeing.
>
Hi,
Are you talking about errors like these?
2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbro
ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
network found in cluster '00000002-0002-0002-0002-00000000017a'
How do I assign "glusternw (???)" to the correct interface?
Other errors on unsynced gluster elements still remain... This is a
production env, so, there is any...
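A minimal check of the mismatch the warning describes, assuming the brick host name (gdnode01 in the log) should resolve to an address on the network that carries the gluster role:

  getent hosts gdnode01        # address the brick host name resolves to
  ip -brief addr show          # addresses actually configured on the node
  gluster volume info engine   # brick definitions as gluster reports them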
2017 Jun 15
2
asterisk 13.16 / pjsip / t.38: res_pjsip_t38.c:207 t38_automatic_reject: Automatically rejecting T.38 request on channel 'PJSIP/91-00000007'
...tting SIP request (980 bytes) to UDP:192.168.10.33:6060 --->
INVITE sip:91 at 192.168.10.33:6060 SIP/2.0
Via: SIP/2.0/UDP 192.168.10.33:5061;rport;branch=z9hG4bKPj201aee1c-20a7-4fe9-b08c-9ec58037f140
From: "CID:+4922222222222" <sip:111111111111 at 192.168.10.33>;tag=d3816d6b-4a00-437b-a525-c2de0f0c3227
To: "root" <sip:91 at 192.168.10.33>;tag=9e9ea185-ea4f-e711-9f85-000db9330d98
Contact: <sip:192.168.10.33:5061>
Call-ID: 48b8a185-ea4f-e711-9f85-000db9330d98 at myfw
CSeq: 24420 INVITE
Allow: OPTIONS, SUBSCRIBE, NOTIFY, PUBLISH, INVITE, ACK, BYE, CANCEL, UPDA...
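One quick way to see whether T.38 is enabled on the endpoint named in the rejection (endpoint '91' here, taken from the channel name in the subject):

  asterisk -rx "pjsip show endpoint 91" | grep -i t38   # t38_udptl controls whether T.38 re-INVITEs are negotiated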
2017 Jun 14
2
asterisk 13.16 / pjsip / t.38: res_pjsip_t38.c:207 t38_automatic_reject: Automatically rejecting T.38 request on channel 'PJSIP/91-00000007'
On 06/14/2017 at 05:53 PM Joshua Colp wrote:
> On Wed, Jun 14, 2017, at 12:47 PM, Michael Maier wrote:
>
> <snip>
>
>>
>> I added this patch to see whether all packages are really freed after
>> they have been processed:
>>
>> --- b/res/res_pjsip/pjsip_distributor.c 2017-05-30 19:44:16.000000000
>> +0200
>> +++
2017 Jul 19
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
On 07/19/2017 08:02 PM, Sahina Bose wrote:
> [Adding gluster-users]
>
> On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
>
> Hi all,
>
>     We have a hyperconverged oVirt cluster with hosted engine on 3
>     fully replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for
2017 Jul 20
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...all 3 nodes
> 2. Are these 12 files also present in the 3rd data brick?
>
I've checked right now: all files exist in all 3 nodes
> 3. Can you provide the output of `gluster volume info` for this volume?
>
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node01:/gluster/engine/brick
Brick2: node02:/gluster/engine/brick
Brick3: node04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.readdir-...
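For reference, output like the above comes from querying the volume by name:

  gluster volume info engine   # type, brick list and reconfigured options for the 'engine' volume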
2020 May 12
0
Re: Unit libvirtd.service could not be found. on VM
...the issue. I then checked the vms-
> libvirt-daemon rpm was indeed missing on my vms. After I installed it &
> reloaded its unit files libvirtd.service was found, but as I started it,
> the error 'operation failed: pool 'default' already exists with uuid
> a42beb54-839e-437b-a48e-d06f6100205c' appeared again on my laptop.
> I'm not sure if I was supposed to install libvirt-daemon rpm on the vms? if
> it was needed - how do I resolve the error now? and any idea why it was
> missing? I never had to install it before
> if not - if you have any other th...
2017 Jul 25
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...> Are you talking about errors like these?
>
> 2017-07-24 15:54:02,209+02 WARN [org.ovirt.engine.core.vdsbro
> ker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler2) [b7590c4]
> Could not associate brick 'gdnode01:/gluster/engine/brick' of volume
> 'd19c19e3-910d-437b-8ba7-4f2a23d17515' with correct network as no gluster
> network found in cluster '00000002-0002-0002-0002-00000000017a'
>
>
> How do I assign "glusternw (???)" to the correct interface?
>
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluste...
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...es.shard: on
user.cifs: off
storage.owner-gid: 36
features.shard-block-size: 512MB
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: on
auth.allow: *
server.allow-insecure: on
Volume Name: engine
Type: Replicate
Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gdnode01:/gluster/engine/brick
Brick2: gdnode02:/gluster/engine/brick
Brick3: gdnode04:/gluster/engine/brick
Options Reconfigured:
nfs.disable: on
performance.re...
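The values under "Options Reconfigured" are set per volume; a sketch of how one of the options listed above would be applied and verified with the standard gluster CLI:

  gluster volume set engine network.ping-timeout 30   # reconfigure a single option on the 'engine' volume
  gluster volume get engine network.ping-timeout      # confirm the active value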
2017 Jul 21
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>:
>
> But it does say something. All these gfids of completed heals in the log
> below are for the ones that you have given the getfattr output of. So
> what is likely happening is there is an intermittent connection problem
> between your mount and the brick process, leading to pending heals again
>
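To see which entries are still pending heal on this volume (and whether any are in split-brain), the usual checks are:

  gluster volume heal engine info               # entries pending heal, per brick
  gluster volume heal engine info split-brain   # entries considered split-brain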
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...e 3rd data brick?
>
>
> I've checked right now: all files exist in all 3 nodes
>
> 3. Can you provide the output of `gluster volume info` for
> this volume?
>
>
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engin...
2020 May 12
2
Unit libvirtd.service could not be found. on VM
Hi all,
Some background:
I recently had some issues with libvirt on my laptop when I got the error
'operation failed: pool 'default' already exists with uuid
dd48b6ad-9a00-46eb-a3a4-c122d8a294a5' when I connected virt-manager. I was
finally able to resolve it yesterday, when I removed libvirt and all its
related content in /etc/libvirt, removed the pool by its UUID, deleted
virbr0
2020 May 12
3
Re: Unit libvirtd.service could not be found. on VM
...hecked the vms-
> > libvirt-daemon rpm was indeed missing on my vms. After I installed it &
> > reloaded its unit files libvirtd.service was found, but as I started it,
> > the error 'operation failed: pool 'default' already exists with uuid
> > a42beb54-839e-437b-a48e-d06f6100205c' appeared again on my laptop.
> > I'm not sure if I was supposed to install libvirt-daemon rpm on the vms?
> if
> > it was needed - how do I resolve the error now? and any idea why it was
> > missing? I never had to install it before
> > if not -...
2017 Jul 19
2
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
[Adding gluster-users]
On Wed, Jul 19, 2017 at 2:52 PM, yayo (j) <jaganz at gmail.com> wrote:
> Hi all,
>
> We have a hyperconverged oVirt cluster with hosted engine on 3 fully
> replicated nodes. This cluster has 2 gluster volumes:
>
> - data: volume for the Data (Master) Domain (for VMs)
> - engine: volume for the hosted_storage Domain (for hosted engine)
>
>
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...3rd data brick?
>>
>
> I've checked right now: all files exist in all 3 nodes
>
>
>> 3. Can you provide the output of `gluster volume info` for this
>> volume?
>>
>
>
> Volume Name: engine
> Type: Replicate
> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node01:/gluster/engine/brick
> Brick2: node02:/gluster/engine/brick
> Brick3: node04:/gluster/engine/brick
> Options Reconfigu...
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper converged on glusterfs 3.8.10 : "engine" storage domain always complains about "unsynced" elements
...now: all files exist in all 3 nodes
>>
>> 3. Can you provide the output of `gluster volume info` for
>> this volume?
>>
>>
>>
>> Volume Name: engine
>> Type: Replicate
>> Volume ID: d19c19e3-910d-437b-8ba7-4f2a23d17515
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: node01:/gluster/engine/brick
>> Brick2: node02:/glust...