Displaying 7 results from an estimated 7 matches for "47a3".
2006 Feb 23 · 4 · Keep getting message in logs that pbx.c cannot find extension context 'default'
...[2470] chan_sip.c: SIP message could not be
handled, bad request: 8FF38834-6E36-4AC7-B762-27AFA6EA84E9@10.0.0.43
Feb 23 07:56:14 NOTICE[2470] pbx.c: Cannot find extension context 'default'
Feb 23 07:56:14 DEBUG[2470] chan_sip.c: SIP message could not be
handled, bad request: 953B85B6-43CA-47A3-9425-93E1D94C46B8@10.0.0.47
*********************
I do not use a 'default' context in my extensions.conf; my contexts have other
names. Am I required to have a context named 'default'?
Thanks
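For context: chan_sip routes an incoming call to the context named by context= in sip.conf (set per peer, or globally in [general]) and falls back to 'default' only when none is configured. A minimal sketch, assuming chan_sip and a hypothetical context name 'internal':

```ini
; sip.conf -- 'internal' is a hypothetical context name
[general]
context=internal        ; incoming SIP calls go here instead of 'default'

; extensions.conf
[internal]
exten => 100,1,Answer()
exten => 100,n,Playback(hello-world)
exten => 100,n,Hangup()
```

With a context set everywhere calls can arrive, no context named 'default' is needed.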
2016 May 13 · 2 · libvirt + openvswitch, <parameters interfaceid='x'/> seems less-than-useful?
...# ovs-vsctl list bridge
_uuid : 16847994-eb75-4e71-a913-50edd8a89252
mirrors : [bfc10d05-846e-4653-8417-27e1f648da93]
name : "malware0"
ports : [1c09dd43-52d0-449b-81a2-537ddafb4966,
6c6e3d97-d55b-4d55-8179-302412242664, f90820f9-056f-47a3-bd51-c5190ad1df46]
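For context: when a guest interface declares an openvswitch virtualport, libvirt records the interfaceid on the OVS port as external_ids:iface-id, which is how the port maps back to the guest. A minimal sketch of such a domain XML fragment, reusing the bridge name from the output above and a clearly hypothetical UUID:

```xml
<interface type='bridge'>
  <source bridge='malware0'/>
  <virtualport type='openvswitch'>
    <!-- hypothetical UUID; libvirt generates one if the element is omitted -->
    <parameters interfaceid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'/>
  </virtualport>
  <model type='virtio'/>
</interface>
```

On the host, `ovs-vsctl --columns=external_ids list Interface` should then show the same UUID under iface-id for that port.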
2016 May 14 · 0 · Re: libvirt + openvswitch, <parameters interfaceid='x'/> seems less-than-useful?
...> _uuid               : 16847994-eb75-4e71-a913-50edd8a89252
> mirrors : [bfc10d05-846e-4653-8417-27e1f648da93]
> name : "malware0"
> ports : [1c09dd43-52d0-449b-81a2-537ddafb4966,
> 6c6e3d97-d55b-4d55-8179-302412242664, f90820f9-056f-47a3-bd51-c5190ad1df46]
2017 Sep 11 · 0 · Gluster command performance - Setting volume options slow
...-02
N/A N/A N N/A
Task Status of Volume dashingdev_storage
------------------------------------------------------------------------------
There are no active volume tasks
Volume Name: dashingdev_storage
Type: Replicate
Volume ID: 22937936-2e28-47a3-b65d-cfc9b2c0d069
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: int-gluster-01:/mnt/gluster-storage/dashingdev_storage
Brick2: int-gluster-02:/mnt/gluster-storage/dashingdev_storage
Brick3: int-gluster-03:/mnt/gluster-storage/dashingdev_storage
Op...
2017 Oct 26 · 0 · not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks
getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
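A usage sketch of step 2, assuming a hypothetical brick path; run it on every brick that holds a copy of the file, since comparing the trusted.afr.* values across bricks is what reveals a pending or split-brain heal:

```shell
# hypothetical brick path -- repeat on each brick host and diff the output;
# -d dumps all xattrs, -e hex prints values in hex, -m . matches every name
getfattr -d -e hex -m . /mnt/gluster-storage/home/brick/path/to/file
```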
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
2017 Oct 26 · 3 · not healing one file
On a side note, try the recently released health report tool and see whether it
diagnoses any issues in the setup. Currently you may have to run it on all
three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
2017 Oct 26 · 2 · not healing one file
...common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 01bfe795-8fc3-493a-925c-925ef0a1b4c3. sources=0 [2] sinks=1
[2017-10-25 10:40:22.688603] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on c23bf112-2ad6-47a3-878f-fb2cbd115375. sources=0 [2] sinks=1
[2017-10-25 10:40:22.690319] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on c23bf112-2ad6-47a3-878f-fb2cbd115375
[2017-10-25 10:40:22.693674] I [MSGID: 108026] [afr-self-heal-c...