Hi all,

I am the developer of the llcon software (llcon.sf.net), which makes it possible for musicians to play together in real time over the internet. Up to now I have used ADPCM or no audio coding at all. Gregory Maxwell pointed me to the great CELT project. Using CELT has the advantages of supporting higher sample rates, a lower code rate, and better error concealment. I have now finished integrating CELT into llcon, replacing all previous audio codecs. CELT is now the core technology in llcon.

To proceed with an official llcon release, I have some questions regarding CELT:

- Gregory suggested an extension to CELT so that different CELT streams could be mixed without adding additional coding delay (I need this in my llcon server). Are there any plans in that direction? If yes, when can I expect to get the code?

- Since CELT is now a core component of llcon, am I allowed to use some CELT text or logo in the main window of llcon?

- To be able to compile the CELT code in Visual Studio, I had to replace all occurrences of "restrict" and "inline" with "__restrict" and "__inline". It would be nice if there were global defines for these keywords so that they could easily be changed depending on the compiler used.

Thank you for your support and, of course, thank you for your great CELT library,

Best Regards,
Volker
Hi Volker,

Volker wrote:
> I have now finished integrating CELT into llcon, replacing all
> previous audio codecs. CELT is now the core technology in llcon.

Cool. Thanks for letting us know. Is this already available in a release? If so, I'd like to add llcon to our software list. Just as a note, it's probably a good idea to include the bitstream version number somewhere in the protocol to make sure you don't run into problems when the CELT bit-stream changes (CELT isn't frozen yet).

> To proceed with an official llcon release, I have some questions
> regarding CELT:
>
> - Gregory suggested an extension to CELT so that different CELT
> streams could be mixed without adding additional coding delay (I
> need this in my llcon server). Are there any plans in that direction?
> If yes, when can I expect to get the code?

There are plans, but it's not that high on the todo list because it has no bit-stream implications (i.e. it could be done at any time). However, if you (or someone else) would like to work on this, we can guide you.

> - Since CELT is now a core component of llcon, am I allowed to use some
> CELT text or logo in the main window of llcon?

There is currently no logo for CELT, but assuming you use CELT unmodified (i.e. with no extension or breakage), then you are allowed to use the name or any logo.

> - To be able to compile the CELT code in Visual Studio, I had to replace
> all occurrences of "restrict" and "inline" with "__restrict" and "__inline".
> It would be nice if there were global defines for these keywords so that
> they could easily be changed depending on the compiler used.

All you need to do is create a config.h file that does

#define restrict
#define inline __inline

or whatever. You can have a look at the win32/config.h file in the Speex package.

Cheers,
Jean-Marc
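A minimal sketch of such a config.h, assuming MSVC is the only non-C99 compiler that needs covering (the include guard and the choice of mapping restrict to __restrict rather than defining it away are illustrative, not taken from the Speex win32/config.h):

/* config.h -- compiler keyword shims, a sketch along the lines above.
 * MSVC of that era does not accept the C99 "restrict" and "inline"
 * keywords in C mode, so map them to the compiler-specific spellings. */
#ifndef CONFIG_H
#define CONFIG_H

#ifdef _MSC_VER
#define restrict __restrict   /* or define to nothing, as in the reply above */
#define inline   __inline
#endif

#endif /* CONFIG_H */

The point of routing this through config.h is that the library sources stay untouched; only the per-compiler header changes.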
Gregory Maxwell wrote:
> On Fri, Aug 21, 2009 at 1:12 AM, Gregory Maxwell <gmaxwell at gmail.com> wrote:
>> On Fri, Aug 21, 2009 at 1:07 AM, Volker <v.fischer at nt.tu-darmstadt.de> wrote:
>>> Gregory Maxwell wrote:
>>>> Hm? If you call celt_decode with a null pointer in place of the data
>>>> it should fade out the audio after consecutive losses.
>>>>
>>>> The relevant code is around line 1286 in celt.c:
>>>> for (i=0;i<C*N;i++)
>>>>    freq[i] = ADD32(EPSILON, MULT16_32_Q15(QCONST16(.9f,15),freq[i]));
>>>>
>>> If I interpret your code correctly, you use an exponential decay for the
>>> fade-out. In previous software projects I did something similar and got
>>> strange effects when I applied a multiplication to very small floating point
>>> values. I guess the same happens here, too. You should introduce a bound for
>>> the floating point values. If the signal is below the bound, set the
>>> floating point value to zero and the problems should disappear (I guess ;-) ).
>>>
>> The addition of EPSILON prevents the creation of denormals and is more
>> efficient than the compare and branch required for zeroizing.
>>
> This makes me think however... are you applying any gain control to
> the output which might be making a quiet tone loud?

I finally understand the code you posted above :-). I was a bit confused by the ADD32 operation, but with your explanation it makes sense now. Putting your explanation as a comment in the code would maybe help others, too.

In llcon, I do not apply any gain to the decoded audio signal. The mono signal is just copied into both stereo channels and then played by the sound card.
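To make the trick explicit, here is a standalone sketch of the same idea in plain float code (not the actual celt.c fixed-point macros; the EPSILON value is illustrative): multiplying by 0.9 every lost frame would eventually push the samples into the denormal range, where floating-point math is very slow on many CPUs, so a tiny constant is added each iteration, which keeps the values bounded away from zero without a per-sample compare and branch.

#include <stddef.h>

#define EPSILON 1e-15f   /* illustrative: tiny, inaudible, but well above denormals */
#define DECAY   0.9f     /* per-frame attenuation during loss concealment */

/* Exponential fade-out of one frame of frequency-domain data.
 * The recurrence x <- EPSILON + DECAY*x converges to EPSILON/(1-DECAY),
 * so the values never decay into the denormal range. */
static void fade_out_frame(float *freq, size_t n)
{
    size_t i;
    for (i = 0; i < n; i++)
        freq[i] = EPSILON + DECAY * freq[i];
}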
Hi Gregory, hi Jean-Marc,

I found the cause of the problem. The bug is in the llcon software, not in the CELT library. In case of a network error, I do not only call the CELT decoder with data=NULL but sometimes with an all-zero data vector (caused by a bug in my socket buffer implementation).

Thank you for your support,

Best Regards,
Volker

> When I use CELT in the llcon software, I noticed that in the case of
> network trouble, I get a constant tone back from the CELT decoder (audio
> is not muted after some audio frames as I would expect). What I do in
> case of network trouble is that I use a NULL pointer instead of putting
> actual coded data to the decoder to force the "error resilience" in the
> decoder.
>
> I wonder if I am doing something wrong using the CELT library or if this
> is a bug.
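For anyone hitting the same symptom: the decoder's loss concealment only kicks in when it is told a frame is missing; a zero-filled buffer looks like a valid (if nonsensical) packet and is decoded as audio. A sketch of the receive-side logic, using a hypothetical decode_frame() wrapper in place of the actual CELT decoder call:

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical wrapper around the CELT decoder call; data == NULL
 * signals "frame lost" so the decoder runs its concealment/fade-out. */
extern void decode_frame(const unsigned char *data, size_t len, short *pcm_out);

/* How the socket-buffer path should hand frames to the decoder. */
void handle_received_frame(const unsigned char *payload, size_t len,
                           bool frame_was_lost, short *pcm_out)
{
    if (frame_was_lost || payload == NULL) {
        /* Report the loss: pass NULL, never a zero-filled buffer,
         * otherwise concealment never triggers. */
        decode_frame(NULL, 0, pcm_out);
    } else {
        decode_frame(payload, len, pcm_out);
    }
}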