search for: a525

Displaying 6 results from an estimated 6 matches for "a525".

2011 Apr 20
1
add brick unsuccessful
...a-8174-4c6a-9b8c-9fe5ce8e2161
[2011-04-20 12:55:06.945003] I [glusterd-utils.c:2062:glusterd_friend_find_by_uuid] glusterd: Friend found.. state: Peer in Cluster
[2011-04-20 12:55:06.945018] I [glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received ACC from uuid: 798bcb59-f25b-4edc-a525-e92272f54391
[2011-04-20 12:55:06.945027] I [glusterd-utils.c:2062:glusterd_friend_find_by_uuid] glusterd: Friend found.. state: Peer in Cluster
[2011-04-20 12:55:06.945124] I [glusterd3_1-mops.c:395:glusterd3_1_cluster_lock_cbk] glusterd: Received ACC from uuid: e9a3d2b9-1292-48b8-9029-aedbe7fa837...
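The subject indicates the add-brick operation failed even though the cluster-lock ACKs in the log succeeded. For context, a minimal sketch of the operation being logged (volume name and brick path are hypothetical, not taken from the thread):

    # confirm all peers are in "Peer in Cluster" state before expanding
    gluster peer status
    # the operation whose cluster-lock phase appears in the log above
    gluster volume add-brick <volname> server3:/export/brick1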
2017 Jun 15
2
asterisk 13.16 / pjsip / t.38: res_pjsip_t38.c:207 t38_automatic_reject: Automatically rejecting T.38 request on channel 'PJSIP/91-00000007'
...SIP request (980 bytes) to UDP:192.168.10.33:6060 --->
INVITE sip:91@192.168.10.33:6060 SIP/2.0
Via: SIP/2.0/UDP 192.168.10.33:5061;rport;branch=z9hG4bKPj201aee1c-20a7-4fe9-b08c-9ec58037f140
From: "CID:+4922222222222" <sip:111111111111@192.168.10.33>;tag=d3816d6b-4a00-437b-a525-c2de0f0c3227
To: "root" <sip:91@192.168.10.33>;tag=9e9ea185-ea4f-e711-9f85-000db9330d98
Contact: <sip:192.168.10.33:5061>
Call-ID: 48b8a185-ea4f-e711-9f85-000db9330d98@myfw
CSeq: 24420 INVITE
Allow: OPTIONS, SUBSCRIBE, NOTIFY, PUBLISH, INVITE, ACK, BYE, CANCEL, UPDATE, P...
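The t38_automatic_reject message in the subject is logged by res_pjsip_t38 when a received T.38 negotiation request goes unanswered and the module rejects it on a timer. Whether T.38 is negotiated at all on a PJSIP endpoint is governed by its T.38 options; a minimal pjsip.conf sketch (the endpoint name is taken from the channel 'PJSIP/91-00000007'; all values are assumptions, not the poster's configuration):

    [91]
    type = endpoint
    ; negotiate T.38 UDPTL rather than leaving re-INVITEs to the automatic reject timer
    t38_udptl = yes
    ; error-correction scheme: none, fec, or redundancy
    t38_udptl_ec = redundancy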
2017 Jun 14
2
asterisk 13.16 / pjsip / t.38: res_pjsip_t38.c:207 t38_automatic_reject: Automatically rejecting T.38 request on channel 'PJSIP/91-00000007'
On 06/14/2017 at 05:53 PM Joshua Colp wrote:
> On Wed, Jun 14, 2017, at 12:47 PM, Michael Maier wrote:
> > <snip>
> >>
> >> I added this patch to see if really all packages are freed after
> >> they have been processed:
> >>
> >> --- b/res/res_pjsip/pjsip_distributor.c 2017-05-30 19:44:16.000000000 +0200
> >> +++
2017 Oct 26
0
not healing one file
Hey Richard,
Could you share the following information, please?
1. gluster volume info <volname>
2. getfattr output of that file from all the bricks: getfattr -d -e hex -m . <brickpath/filepath>
3. glustershd & glfsheal logs
Regards,
Karthik
On Thu, Oct 26, 2017 at 10:21 AM, Amar Tumballi <atumball at redhat.com> wrote:
> On a side note, try recently released health
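A sketch of how the three requested items could be collected (volume name, brick path, and file path are placeholders; log locations assume the default /var/log/glusterfs paths):

    # 1. volume layout and options
    gluster volume info <volname>
    # 2. AFR extended attributes of the unhealed file, run on every brick host
    getfattr -d -e hex -m . <brickpath>/<filepath>
    # 3. self-heal daemon and heal-info logs
    less /var/log/glusterfs/glustershd.log
    less /var/log/glusterfs/glfsheal-<volname>.log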
2017 Oct 26
3
not healing one file
On a side note, try the recently released health report tool and see if it diagnoses any issues in the setup. Currently you may have to run it on all three machines.
On 26-Oct-2017 6:50 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> Thanks for this report. This week many of the developers are at Gluster
> Summit in Prague, will be checking this and respond next
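The tool being referred to is presumably gluster-health-report; assuming it ships a CLI entry point of the same name (an assumption, not stated in the thread), running it on each of the three machines would look like:

    # install and run on every node in the cluster (package and command name assumed)
    pip install gluster-health-report
    gluster-health-report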
2017 Oct 26
2
not healing one file
...n.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed metadata selfheal on 958b9f58-71af-4b5f-93aa-a75cddd5c85f. sources=0 [2] sinks=1
[2017-10-25 10:40:29.922866] I [MSGID: 108026] [afr-self-heal-common.c:1327:afr_log_selfheal] 0-home-replicate-0: Completed data selfheal on 3134be41-9cb2-42a8-a525-2883f1825d7b. sources=0 [2] sinks=1
[2017-10-25 10:40:29.927213] I [MSGID: 108026] [afr-self-heal-metadata.c:52:__afr_selfheal_metadata_do] 0-home-replicate-0: performing metadata selfheal on e78c84c8-6f0b-40e3-a0b2-99aeca8ccaa4
[2017-10-25 10:40:29.930053] I [MSGID: 108026] [afr-self-heal-common...
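These afr_log_selfheal entries show metadata and data self-heals completing for other gfids while the one file stays unhealed. A sketch of the usual commands for inspecting pending heals on a replicate volume (the volume name 'home' is inferred from the '0-home-replicate-0' log prefix):

    # entries still pending heal, listed per brick
    gluster volume heal home info
    # entries in split-brain, if any
    gluster volume heal home info split-brain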