search for: 8f50

Displaying 4 results from an estimated 4 matches for "8f50".

2005 Jun 02
1
Newbie :Call Forwarding problem
...handler
    -- Executing DBput("SIP/777-ad46", "CF/777=888") in new stack
    -- DBput: family=CF, key=777, value=888
Urgent handler
    -- Executing Hangup("SIP/777-ad46", "") in new stack
Urgent handler
*CLI>
*CLI>
    -- Executing Dial("SIP/999-8f50", "SIP/777|7|tr") in new stack
    -- Called 777
Urgent handler
Urgent handler
    -- SIP/777-82e9 is ringing
Urgent handler
Any idea what's wrong? -- Thx MAG
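From the trace, the CF/777=888 entry is written to the AstDB with DBput, but the later Dial for extension 777 rings SIP/777 directly without ever consulting that entry. A minimal dialplan sketch of one way to check the forwarding entry before dialing is below; the extension number, CF family, and timeout come from the trace, while the context name and priority labels are hypothetical. Argument separators use | to match the 1.x-era syntax in the trace; newer Asterisk releases use commas.

[incoming]                                            ; hypothetical context name
exten => 777,1,Set(FWD=${DB(CF/777)})                 ; read the forward target stored by DBput (empty if unset)
exten => 777,n,GotoIf($["${FWD}" != ""]?fwd:normal)   ; branch if a forward number exists
exten => 777,n(normal),Dial(SIP/777|7|tr)             ; no forward set: ring the phone directly
exten => 777,n,Hangup()
exten => 777,n(fwd),Dial(SIP/${FWD}|7|tr)             ; forward set: e.g. SIP/888 after CF/777=888
exten => 777,n,Hangup()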
2005 Jun 01
2
IVR Load
Hi, Thinking about an IVR application and trying to get a handle on the best way to structure it so that the maximum number of concurrent calls can be achieved. If the voice prompts were stored in GSM format and were being played out through an IAX trunk that uses GSM compression, would Asterisk do a decompress/compress on the audio, or would it simply pass the GSM encoding straight through?
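Whether Asterisk transcodes generally depends on whether the prompt file's format matches the codec negotiated on the channel: when it plays a sound file, it prefers the on-disk format closest to the channel codec, so .gsm prompts played to a GSM-only IAX trunk can be streamed without a decode/encode pass. A minimal sketch of that setup is below; the trunk entry name, extension number, and prompt file name are assumptions, not from the thread.

; iax.conf - hypothetical trunk entry, restricted to GSM so stored .gsm prompts
; can be passed through rather than transcoded
[ivr-trunk]
type=friend
host=dynamic
disallow=all
allow=gsm

; extensions.conf - assumes the prompt exists on disk as prompt-welcome.gsm
exten => 5000,1,Answer()
exten => 5000,n,Playback(prompt-welcome)   ; Asterisk picks the file format matching the channel codec
exten => 5000,n,Hangup()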
2017 Aug 06
1
[3.11.2] Bricks disconnect from gluster with 0-transport: EPOLLERR
...I [MSGID: 106493] [glusterd-rpc-ops.c:700:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: c4e38a4f-f61f-4bf5-aee1-e33e4daf4ef5
[2017-08-06 03:12:39.089579] I [MSGID: 106493] [glusterd-rpc-ops.c:485:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 9b89d233-4cb1-425e-8f50-a13ced4ffbc5, host: 128.138.140.227, port: 0
[2017-08-06 03:12:39.100236] I [MSGID: 106492] [glusterd-handler.c:2717:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 87a12be2-5bb0-47cd-88c4-b2c28c03066a
[2017-08-06 03:12:39.100258] I [MSGID: 106502] [glusterd-handler.c...
2007 Dec 09
8
zpool kernel panics.
Hi Folks, I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris 10 280R (SPARC) server. The message I get on panic is this: panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment (offset=423713792 size=1024) This seems to come about when the zpool is being used or being scrubbed - about twice a day at the moment. After the reboot, the scrub seems to have