similar to: Speex encoding/decoding producing garbled audio

Displaying 20 results from an estimated 400 matches similar to: "Speex encoding/decoding producing garbled audio"

2005 Jan 05
4
Encoding and decoding problem in speex 1.0.4
Hi, I am using the speex 1.0.4 library on Windows. I have posted my problem before but didn't get a solution. I am doing a VoIP project in which I record sound and stream it to the peer. Wanting to encode and decode wav files is what brought me to this site. I am recording sound in the following format: m_WaveFormatEx.wFormatTag = WAVE_FORMAT_PCM;
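A capture format that lines up with Speex narrowband keeps the rest of the pipeline simple: 8 kHz, mono, 16-bit PCM means one 20 ms frame is exactly 160 samples (320 bytes). A minimal sketch of such a WAVEFORMATEX setup follows; only the WAVE_FORMAT_PCM tag comes from the post, and the remaining field values are assumptions, since the post is truncated here.

    #include <windows.h>
    #include <mmsystem.h>

    /* Sketch only: 8 kHz / mono / 16-bit are assumed values chosen to match
     * Speex narrowband, not taken from the original (truncated) post. */
    static void init_capture_format(WAVEFORMATEX *wf)
    {
        wf->wFormatTag      = WAVE_FORMAT_PCM;
        wf->nChannels       = 1;      /* mono */
        wf->nSamplesPerSec  = 8000;   /* narrowband sample rate */
        wf->wBitsPerSample  = 16;     /* 16-bit shorts, as Speex expects */
        wf->nBlockAlign     = (WORD)(wf->nChannels * wf->wBitsPerSample / 8);
        wf->nAvgBytesPerSec = wf->nSamplesPerSec * wf->nBlockAlign;
        wf->cbSize          = 0;
    }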
2004 Sep 14
0
Speex encoding/decoding producing garbled audio
Whoops, I left this message in my outbox. I managed to fix the problem. Apparently I was only copying 160 bytes (the frame size) back into the audio stream when I should have been copying 320 (chars <-> shorts confused me there). That's why I could hear myself yet it was distorted: half the wav was missing =) To answer some of the other questions here, for any insight into what I'm doing:
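The fix turns on the unit of SPEEX_GET_FRAME_SIZE: it is reported in samples, so a narrowband frame of 160 shorts occupies 320 bytes. A minimal decoding sketch of that idea (the helper name and buffers are hypothetical, not from the thread):

    #include <string.h>
    #include <speex/speex.h>

    /* Hypothetical helper: decode one packet and append the whole frame.
     * frame_size counts shorts, so the byte count is frame_size * sizeof(short). */
    static int decode_packet(void *dec_state, SpeexBits *bits,
                             char *packet, int packet_len, short *out_stream)
    {
        int frame_size;
        speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &frame_size); /* 160 for narrowband */

        short pcm[640];                        /* large enough for any Speex mode */
        speex_bits_read_from(bits, packet, packet_len);
        speex_decode_int(dec_state, bits, pcm);

        memcpy(out_stream, pcm, frame_size * sizeof(short));  /* 320 bytes, not 160 */
        return frame_size;
    }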
2005 Jan 11
0
Decoding producing garbled sound
Hi, I am back. Thanks, everybody, for your suggestions. I could finally get something out of the decoder that makes sense, but the problem is that it is highly noisy and the voice is not very clear. Normally, without the codec, I get good audible sound through the microphone. I'm using Speex version 1.1.6. I've also used 1.0.4 beforehand and experienced the same problem with it. 1.
2010 Mar 03
2
uint decode error on visual studio...
Is this a common warning? The decoder doesn't return an error on it, but I see it a lot in my test application on Windows. It is non-existent on my Linux box. I haven't tried MinGW yet. Please note that I'm using Visual Studio 2008 with the vcproj that Bjoern Rasmussen made for 0.5.2 (with some file references removed) at the moment, and it is giving a lot of C4554 warnings
2012 Dec 18
2
multi stream decode
Hi, I don't understand how the multistream API in Opus works. I need to send two mono streams over the network with RTP. I think I'm right when I create an OpusMSDecoder with opus_multistream_decoder_create(48000, 2, 2, 0, mapping, NULL), where mapping is: unsigned char mapping[2] = {0, 1}; isn't it? Next, I need to encode data which I get from JACK (float), so I use
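For reference, a minimal sketch of that setup, treating the two mono streams as two uncoupled streams mapped one-to-one onto the output channels. The helper name, error handling, and the encoder application type are assumptions, not taken from the post:

    #include <opus/opus_multistream.h>

    /* Two mono (uncoupled) streams; output channel i is fed by stream mapping[i]. */
    static int create_pair(OpusMSEncoder **enc, OpusMSDecoder **dec)
    {
        unsigned char mapping[2] = {0, 1};
        int err;

        *enc = opus_multistream_encoder_create(48000, /* channels */ 2,
                                               /* streams */ 2, /* coupled */ 0,
                                               mapping, OPUS_APPLICATION_AUDIO, &err);
        if (err != OPUS_OK) return err;

        *dec = opus_multistream_decoder_create(48000, 2, 2, 0, mapping, &err);
        return err;
    }

    /* Float samples from JACK can go straight into the float encode call:
     * opus_multistream_encode_float(enc, pcm, frame_size, packet, max_bytes); */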
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3 Gbps throughput. Is there anything to tweak to get better throughput, or am I running into other limits (e.g., I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
2011 Nov 17
1
Just getting noise
I'm only doing one frame; using speex_encode_int greatly simplifies my code. I'm not sure why the sample I was working from was converting the shorts to floats and then calling the other encode/decode methods. Based on your suggestions I tried the following, but I get the same result. virtual Enigma::u8* Encode(Enigma::u8* inputBuffer, size_t inputSize, size_t& outputSize) {
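A minimal, self-contained version of that path, using speex_encode_int on 16-bit samples directly. The helper name and the one-shot encoder lifetime are assumptions chosen for brevity; the thread itself keeps the encoder state alive in the object, which is the better pattern for streaming:

    #include <stddef.h>
    #include <speex/speex.h>

    /* Hypothetical sketch: encode whole 160-sample narrowband frames from a
     * short buffer; speex_encode_int takes the shorts directly, no float step. */
    static int encode_pcm(short *pcm, size_t nsamples, char *out, int out_cap)
    {
        void *enc = speex_encoder_init(&speex_nb_mode);   /* 8 kHz narrowband */
        int quality = 8, frame_size, written = 0;
        speex_encoder_ctl(enc, SPEEX_SET_QUALITY, &quality);
        speex_encoder_ctl(enc, SPEEX_GET_FRAME_SIZE, &frame_size);  /* 160 samples */

        SpeexBits bits;
        speex_bits_init(&bits);

        for (size_t off = 0; off + (size_t)frame_size <= nsamples; off += (size_t)frame_size) {
            speex_bits_reset(&bits);                       /* one packet per frame */
            speex_encode_int(enc, pcm + off, &bits);
            written += speex_bits_write(&bits, out + written, out_cap - written);
        }

        speex_bits_destroy(&bits);
        speex_encoder_destroy(enc);
        return written;                                    /* total encoded bytes */
    }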
2011 Nov 16
2
Just getting noise
Alright, noted. I changed my code so that the state is created in the constructor and destroyed in the destructor of the object. However, I'm still getting the same issue, although I'm sure that would have bitten me sooner or later. The new code is as follows. virtual Enigma::u8* Encode(Enigma::u8* inputBuffer, size_t inputSize, size_t& outputSize) { short *in = (short*)inputBuffer;
2006 Jun 21
1
Expected network throughput
Hi, I have just started to work with Xen and have a question regarding the expected network throughput. Here is my configuration: Processor: 2.8 GHz Intel Celeron (Socket 775) Motherboard: Gigabyte 8I865GVMF-775 Memory: 1.5 GB Basic system: Kubuntu 6.06 Dapper Drake Xen version: 3.02 (Latest 3.0 stable download) I get the following iperf results: Src Dest Throughput Dom0 Dom0
2010 Aug 03
1
performance with libvirt and kvm
Hi, I am seeing a performance degradation while using libvirt to start my VM (KVM). The VM is Fedora 12 and the host is also Fedora 12, both with 2.6.32.10-90.fc12.i686. Here are the statistics from iperf: From VM: [ 3] 0.0-30.0 sec 199 MBytes 55.7 Mbits/sec From host: [ 3] 0.0-30.0 sec 331 MBytes 92.6 Mbits/sec libvirt command as seen from ps output: /usr/bin/qemu-kvm -S -M
2008 Jan 31
10
QoS Sample config?
Hi, I am searching for a sample config for my Linux box: Shorewall 3.2.3. Eth0 => Internet access, 4 Mbits on Ethernet; Eth1 => LAN 1; Eth2 => LAN 2; Eth3 => LAN 3. I want to limit the internet access: Eth1 = 2 Mbits, Eth2 = 0.5 Mbits, Eth3 = 1.5 Mbits, but if eth1 doesn't use its 2 Mbits, the other LANs can use it. Does anyone have a simple sample config to help me? Thanks, bye
2000 Jun 07
1
Samba performance problem over a radio link.
Hi all. We have a performance problem over a 2 Mbit radio link. The configuration is as follows: we have an RH Linux 6.1 machine (dual PIII processors, 100 Mbit Ethernet, RAID 5) acting as a server for a network of Windows 95, 98 and NT clients. On the local network I can achieve up to 40 Mbits transferring large files from the server to a Windows 98 machine. This network is connected through a 2 Mbit radio
2014 Aug 17
2
[PATCH] vhost: Add polling mode
> > > > Hi Michael, > > > > Sorry for the delay, had some problems with my mailbox, and I realized > > just now that > > my reply wasn't sent. > > The vm indeed ALWAYS utilized 100% cpu, whether polling was enabled or > > not. > > The vhost thread utilized less than 100% (of the other cpu) when polling > > was disabled. >
2014 Aug 17
2
[PATCH] vhost: Add polling mode
> > > > Hi Michael, > > > > Sorry for the delay, had some problems with my mailbox, and I realized > > just now that > > my reply wasn't sent. > > The vm indeed ALWAYS utilized 100% cpu, whether polling was enabled or > > not. > > The vhost thread utilized less than 100% (of the other cpu) when polling > > was disabled. >
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server): domU --> domU 1.77 Gbits/sec (using 127.0.0.1) domU --> domU 1.85 Gbits/sec (using domU
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server): domU --> domU 1.77 Gbits/sec (using 127.0.0.1) domU --> domU 1.85 Gbits/sec (using domU
2005 Jul 13
2
Bandwidth shaping and ISP's network peerings
Hello all! I have a small LAN at home, and when someone starts a download (only one), interactive traffic (www, chat and online games) is impossible with the standard kernel queue setup... So I started to shape. My ISP gives me a 512 kbit link to the Internet and a 100 Mbit link to some of the other big ISPs in my country. If I set the rate of the parent htb qdisc at 512 kbits, I will never use
2010 Nov 15
5
Poor performance on bandwidth, Xen 4.0.1 kernel pvops 2.6.32.24
Hello list, I have two different Xen Hypervisor installations on two identical physical servers, on the same switch. The problem is that on my new server (Xen 4.0.1 with pvops kernel 2.6.32.24), I have bad bandwidth performance; I have tested with a file copy and "iperf". Result iperf average: Transfer Bandwidth XEN-A -> Windows
2006 Jul 05
1
kernel udp rate limit
Hi list. First post, please be gentle. Is there any limit on the Linux UDP rate? I am using Linux kernel 2.6 and iperf to measure bandwidth between two endpoints connected by 100 Mbit Ethernet. Running (as root) iperf -u -s and iperf -u -c always gives me 1.05 Mbits/sec, even when run on the same machine. Can somebody clarify this? Thanks in advance. Sebastian
2014 Aug 19
1
[PATCH] vhost: Add polling mode
> That was just one example. There are many other possibilities. Either > actually make the systems load all host CPUs equally, or divide > throughput by host CPU. > The polling patch adds this capability to vhost, reducing costly exit overhead when the VM is loaded. In order to load the VM I ran netperf with a msg size of 256: Without polling: 2480 Mbits/sec, utilization: vm - 100%