Displaying 20 results from an estimated 200 matches similar to: "Call recording with Asterisk BE"
2006 Dec 07
2
queue agent Monitor
[The message body was an image attachment (image001.gif, 2915 bytes) scrubbed by the list archive: http://lists.digium.com/pipermail/asterisk-users/attachments/20061207/2c5609e6/attachment.gif]
2009 Dec 17
2
Integrate a CPE with Asterisk in MGCP
Hello all,
I'm looking for some help to understand why my CPE doesn't work well
with Asterisk using MGCP.
Here is what I want to do:
- Register a TECOM AH4021 on Asterisk via MGCP, with the following profile
in mgcp.conf:
[general]
port = 2727
bindaddr = 10.95.20.1
disallow=all
allow=g729
allow=alaw
020202020202]
context=mgcp
host=dynamic
canreinvite=no
dtmfmode=rfc2833
nat=yes
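For reference, a per-gateway section in mgcp.conf normally follows the [general] block and carries the options shown above; the truncated line appears to be the bracketed gateway section header. A minimal sketch, with a placeholder gateway name and endpoint:

[000b82020202]
; the section name is a placeholder (often the CPE's identifier)
context = mgcp
host = dynamic
canreinvite = no
dtmfmode = rfc2833
nat = yes
; aaln/1 is a typical analog endpoint name; adjust to whatever the AH4021 advertises
line => aaln/1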
2007 Oct 17
0
FW: DID to hunt group?
Thanks ... I forgot to say I tried it with
priorityjumping=yes
in the [globals] section of extensions.conf
still no go...
Gerald, I'll try your suggestion,
and try to figure out the result code tests :-)
Thanks,
Rich
> -----Original Message-----
> From: Gerald A [mailto:geraldablists at gmail.com]
> Sent: Tuesday, October 16, 2007 23:59
> To: rich at isphone.net
> Subject:
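For the "result code tests" mentioned above, a common alternative to priorityjumping is to let Dial fall through to the next priority and branch on ${DIALSTATUS}. A minimal extensions.conf sketch, with placeholder extension, context, and device names:

[from-did]
; ring the first phone; on busy/no answer Dial returns and we hunt onward
exten => 5551234,1,Dial(SIP/phone1,15)
exten => 5551234,n,GotoIf($["${DIALSTATUS}" = "CHANUNAVAIL"]?unavail)
exten => 5551234,n,Dial(SIP/phone2,15)
exten => 5551234,n(unavail),NoOp(Hunt ended with ${DIALSTATUS})
exten => 5551234,n,Congestion(5)
exten => 5551234,n,Hangup()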
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
Here would be also the corresponding log entries on a gluster node brick log file:
[2018-04-09 06:58:47.363536] W [MSGID: 113093] [posix-gfid-path.c:84:posix_remove_gfid2path_xattr] 0-myvol-private-posix: removing gfid2path xattr failed on /data/myvol-private/brick/.glusterfs/12/67/126759f6-8364-453c-9a9c-d9ed39198b7a: key = trusted.gfid2path.2529bb66b56be110 [No data available]
[2018-04-09
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
Hello,
Last Friday I upgraded my GlusterFS 3.10.7 3-way replica (with arbiter) cluster to 3.12.7, and this morning I got a warning that 9 files on one of my volumes are not synced. Indeed, checking that volume with a "volume heal info" shows that the third node (the arbiter node) has 9 files to be healed, but they are not being healed automatically.
All nodes were always online and there
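For reference, the check described above is done with the volume heal commands; a sketch, where the volume name is a placeholder taken from the brick logs quoted in this thread:

gluster volume heal myvol-private info              # list entries pending heal, per brick
gluster volume heal myvol-private info split-brain  # list entries actually in split-brain
gluster volume heal myvol-private                   # trigger an index heal manually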
2009 Jun 30
4
[Bug 1615] New: the pathname length of home directory is limited to less than 256 chars
https://bugzilla.mindrot.org/show_bug.cgi?id=1615
Summary: the pathname length of home directory is limited to
less than 256 chars
Product: Portable OpenSSH
Version: 5.2p1
Platform: Other
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: ssh
AssignedTo:
2006 Jul 14
0
qla2xxx driver failed in dom0 - invalid opcode: 0000 [1] SMP
I wanted to post this in case there is a real bug here.
The system this is on is running Debian Etch with the Xen packages from
Debian Unstable (currently 3.0.2+hg9697-1). I am running all packaged
software so this is probably slightly out of date.
This is on an HP DL320 G4 with a Pentium D 930 processor. I tried unloading
and reloading the qla2xxx module, but rmmod reported it in use, however
2006 Jul 14
0
RE: qla2xxx driver failed in dom0 - invalid opcode: 0000[1] SMP
> I wanted to post this in case there is a real bug here.
>
> The system this is on is running Debian Etch with the Xen packages from
> Debian Unstable (currently 3.0.2+hg9697-1). I am running all packaged
> software so this is probably slightly out of date.
It would be interesting to see if this can be repro'ed on a tree built
from a recent xen-unstable.hg.
2009 Mar 21
0
Bug#520641: Cannot create HVM domain
Package: xen-utils-3.2-1
Version: 3.2.1-2
If I try to create an HVM domain I get the following error message:
araminta:~# xm create -c heceptor.cfg
Using config file "/etc/xen/heceptor.cfg".
Error: Creating domain failed: name=heceptor
xend-log has a Python backtrace in it:
[2009-03-21 14:14:46 25927] DEBUG (XendDomainInfo:84)
XendDomainInfo.create(['vm', ['name',
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:40 PM, mabi wrote:
> Thanks again, that worked and I now have no more unsynced files.
>
> You mentioned that this bug has been fixed in 3.13, would it be possible to backport it to 3.12? I am asking because 3.13 is not a long-term release and as such I would not like to have to upgrade to 3.13.
I don't think there will be another 3.12 release. Adding Karthik to see
2007 Sep 14
3
[LLVMdev] Problem of running data structure analysis (DSA) on Linux kernel
Hi,
I ran into a problem when running DSA on the Linux kernel (the kernel
version I used is 2.4.31). The analysis was aborted when it tried to do
DSNode::mergeTypeInfo on some data structure in the kernel. I have
filed a bug report at http://llvm.org/bugs/show_bug.cgi?id=1656.
My question is: what version of the Linux kernel has LLVM been tested on
successfully? To run the DSA analysis, should I use the
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 04:36 PM, mabi wrote:
> As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
>
> NODE1:
>
> STAT:
> File:
2018 Apr 09
0
New 3.12.7 possible split-brain on replica 3
On 04/09/2018 05:09 PM, mabi wrote:
> Thanks Ravi for your answer.
>
> Stupid question but how do I delete the trusted.afr xattrs on this brick?
>
> And when you say "this brick", do you mean the brick on the arbiter node (node 3 in my case)?
Sorry, I should have been clearer. Yes, the brick on the 3rd node.
`setfattr -x trusted.afr.myvol-private-client-0
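The command above is cut off in the archive; its general form, with a hypothetical file path under the brick shown in the logs earlier in this thread, would look like:

# run on the arbiter (node 3) brick; the path after the brick root is a placeholder
setfattr -x trusted.afr.myvol-private-client-0 /data/myvol-private/brick/path/to/problematicfile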
2005 May 17
1
open ports confusion
I'm showing some weird open ports, considering I only have two allow
rules: AllowSSH & AllowAuth
neverneverland:/# nmap localhost
Starting nmap 3.81 ( http://www.insecure.org/nmap/ ) at 2005-05-17 23:49
CDT
Interesting ports on neverneverland (127.0.0.1):
(The 1656 ports scanned but not shown below are in state: closed)
PORT STATE SERVICE
9/tcp open discard
13/tcp open
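A scan of 127.0.0.1 mostly reflects which daemons are listening locally rather than what the firewall permits from outside, since Shorewall normally leaves loopback traffic alone. The listening sockets and the rules Shorewall actually installed can be checked directly, for example:

netstat -tlnp          # local daemons listening on TCP ports, with their PIDs
iptables -L INPUT -nv  # the netfilter rules generated for the INPUT chain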
2007 Sep 14
0
[LLVMdev] Problem of running data structure analysis (DSA) on Linux kernel
On 9/13/07, Haifeng He <hehaifeng2nd at gmail.com> wrote:
> Hi,
>
> I ran into a problem when running DSA on Linux kernel (the Kernel
> version I used is
> 2.4.31). The analysis was aborted when it tried to do
> DSNode::mergeTypeInfo on some data structure in the kernel. I have
> filed a bug report at http://llvm.org/bugs/show_bug.cgi?id=1656.
It is possible there was a
2010 Feb 23
0
[LLVMdev] how to build eglibc using llvm-gcc without unsupported -fno-toplevel-reorder
> I agree, the impact of the issue is limited. But it prevents out-of-the-box
> compilation of libraries for some targets.
> Also, it looks like the glibc and eglibc maintainers do not welcome patches
> for llvm (yet).
I would be very surprised if glibc ever does. I don't have any
experience with eglibc.
> In general, preserving the order of appearance doesn't seem to be a bad thing.
> Are
2012 Sep 24
1
Slow on one client Win7 program
Got one program that is running very, very slowly on version 3.6.8.
Using SMB2 with log level 3, I saw a lot of these:
[2012/09/24 23:44:43.824970, 3] smbd/smb2_read.c:356(smb2_read_complete)
smbd_smb2_read: fnum=[8523/filename] length=2 offset=1656 read=2
[2012/09/24 23:44:43.825499, 3] lib/util.c:1498(fcntl_getlock)
fcntl_getlock: fd 34 is returned info 2 pid 0
Seems the files are read in
2002 Feb 15
1
Delete & rename with rsync servers
Hello
Is it possible to delete or rename
single files on remote rsync servers?
I seem to be having problems getting
normal upload to work too:
shellak root # ps aux | grep [r]sync
shellak root # rsync --config=/etc/rsync/rsyncd.conf --daemon
shellak root # ps aux | grep [r]sync
root 27527 0.0 0.3 1656 620 ? S 09:02 0:00 rsync --config=/etc/rsync/rsyncd.conf --daemon
shellak
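Assuming the daemon is running as above, a push to a module would look roughly like the sketch below; the module name "pub" is a placeholder, and uploads also require "read only = no" in that module's rsyncd.conf section. Note that rsync has no remote rename operation, though --delete can mirror deletions from a source tree:

rsync -av somefile shellak::pub/                # upload a single file to module "pub"
rsync -av --delete localdir/ shellak::pub/dir/  # mirror a directory; files missing locally are removed remotely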
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether parallel-readdir or readdir-ahead gives the
disconnects, so we know which one to disable?
There is a PDF from last year showing the magic parallel-readdir does:
https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf
-v
On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote:
> By the way, on a slightly related note, I'm pretty
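If it does turn out to be one of these two translators, either can be toggled per volume; a sketch, with the volume name as a placeholder:

gluster volume set myvolume performance.parallel-readdir off
gluster volume set myvolume performance.readdir-ahead off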
2018 Apr 09
2
New 3.12.7 possible split-brain on replica 3
As was suggested to me in the past on this mailing list, I now ran a stat and getfattr on one of the problematic files on all nodes, and at the end a stat on the fuse mount directly. The output is below:
NODE1:
STAT:
File: '/data/myvol-private/brick/dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10/dir11/dir12_Archiv/azipfiledir.zip/OC_DEFAULT_MODULE/problematicfile'
Size: 0 Blocks: 38
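The getfattr output is cut off in the archive; the command usually run for this kind of check dumps all extended attributes in hex, roughly as follows, with the path after the brick root standing in as a placeholder for the long path above:

getfattr -d -m . -e hex /data/myvol-private/brick/path/to/problematicfile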