similar to: Celt 0.7.1 High complexity VS Low complexity

Displaying 20 results from an estimated 1000 matches similar to: "Celt 0.7.1 High complexity VS Low complexity"

2011 Feb 15
1
CELT decoder complexity
Hi, We're using Celt 0.7.1 at the moment. We're thinking of updating the code to a newer version. Is there an appreciable complexity (decoding time) difference among versions (0.7.1 - 0.8.1 - 0.9.1 - 0.10 - 0.11.1)? If so, which one is the fastest? Thanks Regards Riccardo Riccardo Micci Senior DSP Engineer, Wireless Group Cambridge Consultants Science Park, Milton Road
2010 Aug 20
1
CELT complexity question
Hi, I'm testing CELT 0.7.1 speed performance and I'm focusing now on the complexity switch. I've dug in the archive and I found some information. Is it still true that there are two ranges? - 0-2 low complexity mode - 3-10 high complexity mode If no value is given, is 2 (hence low complexity mode) the default setting? Does the complexity mode affect decoding as well? Thank you Best
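The two ranges described in this excerpt can be sketched as follows. The helper name is invented for illustration, and the celt_encoder_ctl call shown in the comment is, from memory, how the knob is actually set in libcelt:

```c
#include <string.h>

/* Illustration of the two ranges from the excerpt above:
 * 0-2 -> low complexity mode, 3-10 -> high complexity mode.
 * In libcelt the value would be set with something like
 *   celt_encoder_ctl(st, CELT_SET_COMPLEXITY(c));
 * (left as a comment so this sketch stays self-contained). */
static const char *celt_complexity_mode(int c)
{
    return (c >= 0 && c <= 2) ? "low" : "high";
}
```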
2010 Jun 24
2
Getting CELT to work under Windows
Hi, My name is Riccardo Micci. I downloaded the CELT source code and I compiled it under Windows. This is meant to be a preliminary study for my company's project. When I run CELT it encodes and decodes the file back saying "Encoder matches decoder!!". When I try to play the output, though, the result is just noise and clicks. The only changes I've applied are some #defines to
2011 Mar 22
1
MAX_PERIOD
Hi, In order to fit the decoder in memory on our embedded architecture we set the MAX_PERIOD #define equal to the frame size. This doesn't affect the code's bit accuracy in normal decoding. The #define is used in the celt_decode_lost function though. Is it possible to get celt_decode_lost to work with a value different from the default? Thanks Riccardo Riccardo Micci Senior DSP Engineer,
2010 Nov 25
1
Celt_decode_lost function (File: P0773)
Hi, I'm using Celt version 0.7.1 in low complexity mode (i.e. no pitch information). In case a packet is lost I was planning to use the celt_decode_lost function. I realised, though, that it uses pitch information internally. Does the celt_decode_lost function still work with no pitch information? Are there some changes to be made? Thanks Regards Riccardo Riccardo Micci Senior DSP Engineer,
2011 Mar 03
1
Bitrev for FFT
Hi, Our DSP has a built-in bitrev instruction, so we're exploring the possibility of calculating the bitrev every time instead of filling the table during initialisation, hence saving some memory. Our frame size is fixed at 320 samples. The two FFT sizes for normal block and short block are 160 and 40 respectively. It's not really clear how the function
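For context on this excerpt: for a power-of-two FFT size, the init-time table is the classic bit-reversal permutation, which a hardware bitrev instruction can compute on the fly, as in the sketch below. Note, though, that the sizes quoted (160 and 40) are not powers of two, so CELT's mixed-radix kiss_fft builds a generalized digit-reversal table there, which a plain bitrev instruction does not cover directly.

```c
/* Reverse the low `bits` bits of x -- what a DSP bitrev instruction
 * computes in one cycle. For an N-point radix-2 FFT, bits = log2(N). */
static unsigned bitrev(unsigned x, int bits)
{
    unsigned r = 0;
    for (int i = 0; i < bits; i++) {
        r = (r << 1) | (x & 1u);
        x >>= 1;
    }
    return r;
}
```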
2010 Jul 07
1
FIXED_POINT
Hi, I've recently successfully built and run CELT under Windows using the "testcelt.c" example file. Since I'm about to port it to an embedded platform I activated the FIXED_POINT #define. I included fixed_generic.h and, without other changes to the code, I tried to encode and decode the same file I previously used. The output though is completely saturated, i.e. it jumps from -32768 to
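Saturation like the one described is often a scaling mismatch between the float and fixed-point sample conventions rather than an arithmetic bug. As an illustration only (assuming a ±1.0 float convention, which may not match what testcelt actually uses), a saturating float-to-Q15 conversion looks like:

```c
#include <stdint.h>

/* Convert a float nominally in [-1.0, 1.0) to a Q15 sample,
 * clamping instead of wrapping on overflow. */
static int16_t float_to_q15(float x)
{
    float y = x * 32768.0f;
    if (y >= 32767.0f)  return 32767;
    if (y <= -32768.0f) return -32768;
    return (int16_t)y;
}
```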
2010 Sep 08
1
Celt 0.7.1 Fixed math
Hi, I'm using Celt 0.7.1 in fixed math mode. In the celt_encode function, if the variable has_pitch is true, the function pitch_search is called. Within this function the find_best_pitch subfunction is called. Here the variable "float score;" is defined. Is this right? I was expecting not to see any float declarations in the fixed math code. Is it possible to redefine it as
2010 Jul 20
1
BYTES_PER_CHAR
Hello, I'm porting CELT 0.7.1 to an embedded platform and unfortunately (at least for me) char is defined as 16 bits. I now have the vocoder compiling, but when I compare the encoded output with a Windows build, they don't match. Among the other problems I think that the char definition is one of the biggest players. I've seen in arch.h the following definitions: /* 2 on TI C5x DSP */
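On a platform where char is 16 bits, two bitstream octets share one char, and the byte order inside that char is pure convention; a mismatch there produces exactly the "doesn't match the Windows build" symptom described above. A hypothetical packing sketch (the helper names and the high-octet-first convention are invented for illustration):

```c
#include <stdint.h>

/* On a DSP where char is 16 bits, two bitstream octets share one
 * 16-bit storage unit. These hypothetical helpers put even-numbered
 * octets in the high half and odd-numbered ones in the low half --
 * one possible convention, not necessarily the one arch.h implies. */
static uint8_t get_octet(const uint16_t *buf, int n)
{
    return (n & 1) ? (uint8_t)(buf[n >> 1] & 0xFF)
                   : (uint8_t)(buf[n >> 1] >> 8);
}

static void set_octet(uint16_t *buf, int n, uint8_t v)
{
    if (n & 1)
        buf[n >> 1] = (uint16_t)((buf[n >> 1] & 0xFF00) | v);
    else
        buf[n >> 1] = (uint16_t)((buf[n >> 1] & 0x00FF) | ((uint16_t)v << 8));
}
```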
2011 Mar 17
2
Error resilience
Hi, We're testing CELT's (version 0.7.1) error resilience capability. We've already used celtdec's packet-loss options, hence we know what to expect in case of whole-packet loss. How does Celt respond to a broken encoded packet? Is it always better to discard it and decode the missing frame through decode_lost? We have the hardware capability of protecting the frame with multiple CRCs.
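A minimal sketch of the per-frame check proposed in this excerpt (the CRC-8 polynomial 0x07 and the helper names are arbitrary illustration choices, not anything from libcelt). A frame failing the check would be discarded and concealed; in libcelt, if memory serves, that means calling celt_decode with NULL packet data, which runs the celt_decode_lost path:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-8 over a frame, polynomial x^8 + x^2 + x + 1 (0x07). */
static uint8_t crc8(const uint8_t *p, size_t len)
{
    uint8_t crc = 0;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Returns nonzero when the frame passes its CRC. On failure the caller
 * would treat the frame as lost, e.g. celt_decode(st, NULL, 0, pcm). */
static int frame_ok(const uint8_t *frame, size_t len, uint8_t expected_crc)
{
    return crc8(frame, len) == expected_crc;
}
```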
2010 Jun 07
0
No subject
eds to drag in the source and header files from the libcelt directory into the project and define HAVE_CONFIG_H in the project's pre-processor definition. The tricky part is to build a config.h file. To get it to work on VS some of the important settings include: #define CELT_BUILD #define USE_ALLOCA #undef VAR_ARRAYS #undef restrict #undef HAVE_STDINT_H #undef inline #define
2010 Sep 02
1
Possible Bug
Hi, Fiddling with Celt I found a possible bug. I'm using CELT 0.7.1, frame size 256, sample rate 32k and bitrate 64k. Here is the scenario: on the decoding side, in the celt_decode function, the "dec" structure is created at each function call and initialized with the ec_dec_init function. The attribute "end_byte" is not initialized though. Decoding a file, the behaviour is the
2011 Jan 12
1
Stereo <-> Mono
Hi, Does Celt (in particular version 0.7.1) exploit correlation between the two channels in stereo mode? In practice, is it possible to use the two channels as two mono signals without affecting quality? Thanks Best Regards Riccardo Riccardo Micci Senior DSP Engineer, Wireless Group Cambridge Consultants Science Park, Milton Road Cambridge, CB4 0DW, England Switchboard: +44 (0)1223
2010 Jan 22
0
CELT 0.7.1 is out -- sort of
Hi everyone, I'd like to announce CELT 0.7.1, which improves the quality of the packet loss concealment (PLC), but does not change the rest of the codec. For this reason it is the first release NOT to break bit-stream compatibility with the previous release (0.7.0). But I promise not to do it again, the next CELT release will likely break compatibility once again. Note that the PLC
2010 Feb 25
1
Compilation for iPhone (celt 0.7.1)
Hi, In case it is of any help: to compile a static library for the iPhone, I had to add the following 2 lines in the plc.c file: #include "arch.h" #include "stack_alloc.h" otherwise the "celt_word16..." types and "VARDECL..." are not defined. Best Regards Stéphane Letz
2014 Oct 17
1
Samba 4 to replicate my samba3.6 config
We are running Arch Linux as a new server, which only has Samba 4 available officially. I am trying to migrate my Samba 3 config to work with Samba 4. I currently use Samba to authenticate Windows users to use our Linux shares, then use the Unix groups set up in NIS to validate a user's access to a particular share. Here is the problem: I can see the shares using Samba 4, but it uses the
2014 Sep 05
4
[LLVMdev] HELP! Recent failure on llvm buildbot
I'm working on lldb. I've just submitted a very small change (r217229) to Triple.h/.cpp. Soon after, I got a mail with the subject: buildbot failure in LLVM on lld-x86_64-darwin13 Details: http://lab.llvm.org:8011/builders/lld-x86_64-darwin13/builds/2571 Blamelist: mg11 My small change certainly did not cause lldb's build to fail on my machine. I looked into the build log:
2014 Oct 16
0
Samba4 to replicate my samba3.6 config
We are running Arch Linux as a new server, which only has Samba 4 available officially. I am trying to migrate my Samba 3 config to work with Samba 4. I currently use Samba to authenticate Windows users to use our Linux shares, using the Unix groups as the valid users. Here is the problem: I can see the shares using Samba 4, but it uses the "Domain users" group to write to the shares and not
2013 Jul 18
0
Help understand decoding of stereo vorbis data
Hi, I'm trying to implement a Vorbis decoder, and am having some trouble getting it to work with stereo Vorbis data. It's giving me some PCM output which is roughly right, but it has artefacts. I think it's most likely something to do with my handling of floor decode/curve synthesis. My first thoughts are that I'm handling the submap number/floor mapping incorrectly; I'm
2014 Sep 09
2
[LLVMdev] Machine Code for different architectures
Hi, We have some DSP architectures (Kalimba) which have 24 bits as their "minimum addressable unit". This means that the sizeof a char (and an int and a short, for that matter) is 24 bits. I quickly read the posted link WritingAnLLVMBackend.html but did not see an obvious answer to the following question: Is it possible to write a backend that faithfully represents these