similar to: Bad sound quality on inbound calls.

Displaying 20 results from an estimated 700 matches similar to: "Bad sound quality on inbound calls."

2006 Apr 13
1
call center running Asterisk - sound quality - critical!
I did not install soxmix on my Linux box. If you are having issues with MixMonitor, you can put both legs of the call into a conference and record the conference. -----Original Message----- From: asterisk-users-bounces@lists.digium.com [mailto:asterisk-users-bounces@lists.digium.com] On Behalf Of Matt Roth Sent: Thursday, April 13, 2006 1:20 PM To: Asterisk Users Mailing List - Non-Commercial
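The conference workaround mentioned above can be sketched as a dialplan fragment. This is a hypothetical example using the MeetMe application of that Asterisk 1.x era; the extension, conference room number, and recording filename are assumptions, the room must exist in meetme.conf, and the exact recording variable should be checked against your version.

```
; Hypothetical extensions.conf fragment: record both call legs by
; dropping them into a MeetMe conference and recording the conference.
; Room 8600 is an assumption and must be defined in meetme.conf.
exten => 500,1,Answer()
; The 'r' option records the conference; the output file can usually
; be steered with the MEETME_RECORDINGFILE channel variable.
exten => 500,2,Set(MEETME_RECORDINGFILE=/var/spool/asterisk/monitor/conf-${UNIQUEID})
exten => 500,3,MeetMe(8600,r)
```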
2006 Apr 13
1
call center running Asterisk - sound quality - critical!
I just checked the source code: Monitor uses ast_writestream and it eventually goes down to au_write, g723_write, etc. They don't commit to disk, so, in effect, if you have a lot of RAM, the audio should stay in RAM until it gets swapped out or the file is closed. -----Original Message----- From: asterisk-users-bounces@lists.digium.com [mailto:asterisk-users-bounces@lists.digium.com] On
2023 Feb 20
1
Gluster 11.0 upgrade
I made a recursive diff on the upgraded arbiter. /var/lib/glusterd/vols/gds-common is the upgraded arbiter; /home/marcus/gds-common is one of the other nodes still on gluster 10. diff -r /var/lib/glusterd/vols/gds-common/bricks/urd-gds-030:-urd-gds-gds-common /home/marcus/gds-common/bricks/urd-gds-030:-urd-gds-gds-common 5c5 < listen-port=60419 --- > listen-port=0 11c11 <
2006 Mar 22
0
Help! Directing Inbound calls to different extensions
OK, Asterisk newbie here. I've read TFOT and the Asterisk handbook and lurked, but my skills are a bit poor, so perhaps someone could post a dialplan fragment to help me. Brief details: Asterisk@Home 2.6 installed on a Mini-ITX system; Digium 400 card with 3 FXO modules; 3 FXS interfaces via an IAXy (1 FXS) and a Linksys PAP2 (Sipura 2002) (2 FXS). I started with AMP to get going but have started
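Since the poster asks for a dialplan fragment, here is a minimal, hypothetical sketch for sending each inbound FXO line to a different internal extension. The contexts, channel assignments, and extension numbers are all assumptions for a Zap/TDM400 setup of that era; zapata.conf must place each FXO channel into the matching context.

```
; zapata.conf (sketch): give each FXO port its own context, e.g.
;   context=from-line1
;   channel => 1
; extensions.conf (sketch):
[from-line1]
exten => s,1,Dial(SIP/2001,20)        ; Linksys PAP2 port 1
exten => s,2,Voicemail(u2001)
[from-line2]
exten => s,1,Dial(SIP/2002,20)        ; Linksys PAP2 port 2
exten => s,2,Voicemail(u2002)
[from-line3]
exten => s,1,Dial(IAX2/iaxy-phone,20) ; IAXy FXS
exten => s,2,Voicemail(u2003)
```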
2023 Oct 27
1
State of the gluster project
Hi Diego, I have had a look at BeeGFS and it seems more similar to Ceph than to Gluster. It requires extra management nodes similar to Ceph, right? Second of all, there are no snapshots in BeeGFS, as I understand it. I know Ceph has snapshots, so for us this seems a better alternative. What is your experience of Ceph? I am sorry to hear about your problems with Gluster; from my experience we had
2023 Feb 21
2
Gluster 11.0 upgrade
Hi Xavi, Copying the same info file worked well and the gluster 11 arbiter is now up and running, and all the nodes are communicating the way they should. Just another note on something I discovered on my virtual machines. All three nodes have been upgraded to 11.0 and are working. If I run: gluster volume get all cluster.op-version I get: Option Value ------
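The op-version check and bump referenced in these threads can be sketched with the standard gluster CLI; the set command should only be run once every node in the cluster is on the new version.

```
# Check the active cluster op-version
gluster volume get all cluster.op-version
# Raise it once all nodes run gluster 11.x
gluster volume set all cluster.op-version 110000
```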
2018 Jul 13
2
Upgrade to 4.1.1 geo-replication does not work
Hi Kotresh, Yes, all nodes have the same version 4.1.1, both master and slave. All glusterd are crashing on the master side. Will send logs tonight. Thanks, Marcus ################ Marcus Pedersén Systemadministrator Interbull Centre ################ Sent from my phone ################ On 13 July 2018 11:28, Kotresh Hiremath Ravishankar <khiremat at redhat.com> wrote: Hi Marcus, Is the
2023 Oct 27
2
State of the gluster project
Hi all, I just have a general thought about the gluster project. I have got the feeling that things have slowed down in the gluster project. I have had a look at GitHub and to me the project seems to have slowed down; for gluster version 11 there have been no minor releases, we are still on 11.0 and I have not found any references to 11.1. There is a milestone called 12 but it seems to be stale. I have hit
2021 Nov 29
1
Gluster 10 used ports
Hi all, Over the years I have been using the same ports in my firewall for gluster, 49152-49251 (I know, a bit too many ports, but it is a local network with limited access). Today I upgraded from version 9 to version 10 and it initially went well, until I ran: gluster volume heal my-vol info summary I got the answer: Status: Transport endpoint is not connected I realized that glusterfsd was using 50000+
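Gluster 10 widened the default brick port range, so bricks can land above an old 49152-49251 firewall window. As a sketch, the range glusterd hands out can be constrained in glusterd.vol; the option names below exist in recent releases but should be verified against the installed version before relying on them.

```
# /etc/glusterfs/glusterd.vol (fragment, sketch)
volume management
    type mgmt/glusterd
    ...
    option base-port 49152
    option max-port  49251
end-volume
```

After editing, glusterd must be restarted on each node, and already-running bricks keep their old ports until restarted.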
2023 Dec 19
2
Gluster 11 OP version
Hi all, We upgraded to gluster 11.1, and the OP version issue was fixed in this version, so I changed the OP version to 110000. Now we have an obscure, vague problem. Our users usually run 100+ processes with GNU parallel, and now the execution time has increased to close to double. I can see that there are a couple of heals happening every now and then, but this does not seem strange to me. Just to
2023 Oct 25
1
Replace faulty host
Hi all, I have a problem with one of our gluster clusters. This is the setup: Volume Name: gds-common Type: Distributed-Replicate Volume ID: 42c9fa00-2d57-4a58-b5ae-c98c349cfcb6 Status: Started Snapshot Count: 26 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: urd-gds-031:/urd-gds/gds-common Brick2: urd-gds-032:/urd-gds/gds-common Brick3: urd-gds-030:/urd-gds/gds-common
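For replacing a failed host in a replicated volume like the one above, the usual CLI route is replace-brick followed by self-heal. A sketch, where new-host is a hypothetical replacement carrying the same brick path, and which of the three bricks is actually faulty is an assumption:

```
gluster volume replace-brick gds-common \
    urd-gds-030:/urd-gds/gds-common \
    new-host:/urd-gds/gds-common \
    commit force
# Then watch self-heal repopulate the new brick:
gluster volume heal gds-common info summary
```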
2023 Feb 20
2
Gluster 11.0 upgrade
Hi again Xavi, I did some more testing on my virtual machines with the same setup: Number of Bricks: 1 x (2 + 1) = 3 If I do it the same way and upgrade the arbiter first, I get the same behavior: the bricks do not start and the other nodes do not "see" the upgraded node. If I upgrade one of the other nodes (non-arbiter) and restart glusterd on both the arbiter and the other, the arbiter
2023 Oct 27
1
State of the gluster project
Maybe a bit OT... I'm no expert on either, but the concepts are quite similar. Both require "extra" nodes (metadata and monitor), but those can be virtual machines or you can host the services on OSD machines. We don't use snapshots, so I can't comment on that. My experience with Ceph is limited to having it working on Proxmox. No experience yet with CephFS. BeeGFS is
2023 Oct 28
1
State of the gluster project
Well, after the IBM acquisition, RH discontinued their support in many projects including GlusterFS (certification exams were removed, the paid product went EOL, etc). The only way to get it back on track is with a sponsor company that has the capability to drive it. Kadalu is relying on GlusterFS, but they are not as big as Red Hat and, based on one of the previous e-mails, they will need sponsorship to
2018 Jul 10
0
Geo replication manual rsync
Hi all, I have set up a gluster system with geo-replication (CentOS 7, gluster 3.12). I have moved about 30 TB to the cluster. It seems that it goes really slowly for the data to be synchronized to the geo-replication slave. It has been active for weeks and still just 9 TB has ended up on the slave side. I pause the replication once a day and make a snapshot with a script. Does this slow things down? Is it possible
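The daily pause/snapshot/resume cycle described can be sketched with the standard CLI; the master volume, slave host, and snapshot names below are assumptions.

```
# Sketch: pause geo-replication, snapshot, resume (names are assumptions)
gluster volume geo-replication mastervol slavehost::slavevol pause
gluster snapshot create daily-snap mastervol
gluster volume geo-replication mastervol slavehost::slavevol resume
```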
2024 Jan 19
1
Heal failure
Hi all, I have a really strange problem with my cluster. Running gluster 10.4, replicated with an arbiter: Number of Bricks: 1 x (2 + 1) = 3 All the files in the system seem fine and I have not found any broken files, even though I have 40000 files that need healing in heal-count. Heal fails for all the files over and over again. If I use heal info I just get a long list of gfids, and trying
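When heal info only shows gfids, a common diagnostic path is to check for split-brain and map a gfid back to a filename on a brick. A sketch; the volume name, brick path, and gfid are placeholders:

```
gluster volume heal gds-common info summary
gluster volume heal gds-common info split-brain
# On a brick, a gfid lives at .glusterfs/<aa>/<bb>/<full-gfid>; for regular
# files this is a hard link, so the real path shares its inode:
find /urd-gds/gds-common -samefile \
    /urd-gds/gds-common/.glusterfs/aa/bb/aabbcc00-0000-0000-0000-000000000000
```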
2023 Dec 14
2
Gluster -> Ceph
Hi all, I am looking into Ceph and CephFS, and in my head I am comparing them with Gluster. The way I have been running gluster over the years is either replicated or replicated-distributed clusters. The small setup we have had has been a replicated cluster with one arbiter and two fileservers. These fileservers have been configured with RAID6, and that RAID has been used as the brick. If disaster
2023 Oct 27
1
Replace faulty host
Hi Markus, It looks quite well documented, but please use https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/sect-replacing_hosts as 3.5 is the latest version for RHGS. If the OS disks are failing, I would have tried moving the data disks to the new machine and transferring the gluster files in /etc and /var/lib to the new node. Any reason to reuse
2023 Oct 27
1
State of the gluster project
Hi, Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other projects, so Gluster doesn't get much attention. From my experience, it has deteriorated since about version 9.0, and we're migrating to alternatives. /Z On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén <marcus.pedersen at slu.se> wrote: > Hi all, > I just have a general thought about the gluster >
2023 Oct 27
1
State of the gluster project
It is very unfortunate that Gluster is not maintained. From Kadalu Technologies, we are trying to set up a small team dedicated to maintaining GlusterFS for the next three years. This will only be possible if we get funding from the community and companies. The details about the proposal are here: https://kadalu.tech/gluster/ About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few