2009 May 09
2
Another paper on CELT
Hi,
For those interested in reading more on CELT, here's another paper that
just got accepted. That one focuses on the low-complexity mode and is
based on 0.5.1.
J.-M. Valin, T. B. Terriberry, G. Maxwell, A Full-Bandwidth Audio Codec
with Low Complexity and Very Low Delay, Accepted for EUSIPCO 2009.
http://people.xiph.org/~jm/papers/celt_eusipco2009.pdf
Cheers,
Jean-Marc
2008 Nov 24
6
Adding CELT support to netjack: some questions
Hi.
I am currently adding CELT support to netjack.
Very nice to see a free low-latency codec :)
I currently don't require robustness against packet loss,
because the sync code of netjack does not handle packet loss very
gracefully. How much bandwidth is wasted on this feature?
Is it sensible to have the data downsampled before encoding, in
order to reduce bandwidth? I suspect that just
2010 Mar 09
1
Using CELT on iPod
Hi,
We are testing CELT 0.7.1 on an iPod touch and it seems we are already reaching the CPU limit of the machine for a one-channel stream in both directions. The CELT README says that the code should be compiled with fixed-point support, but it is not clear how it has to be used later on. We currently use the celt_encode_float/celt_decode_float functions. Can we still use them with the fixed-point version
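(Sketch, not from the thread: assuming the library is configured with --enable-fixed-point and assuming the 0.7.x celt_encode prototype with the optional synthesis argument; verify against the celt.h that ships with your build. The function and buffer names below are made up for illustration.)

// Sketch only: in a fixed-point build of CELT 0.7.1 the float API still
// works (it converts internally), but feeding 16-bit PCM to celt_encode()
// skips the float path entirely.
#include "celt.h"

int encode_frame(CELTEncoder *enc, const celt_int16 *pcm,
                 unsigned char *packet, int max_bytes)
{
    // Third argument is the optional synthesis output; NULL is allowed.
    return celt_encode(enc, pcm, NULL, packet, max_bytes);
}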
2010 May 29
2
[LLVMdev] Vectorized LLVM IR
>
> <32 x float> takes up 8 SSE registers; you're likely running into
> issues with register pressure. Does it work better if you use
> something smaller like <4 x float>?
>
> Besides that, I don't see any obvious issues.
>
> -Eli
You are right, yes. The code runs faster with <4 x float> types, though it is still a bit slower than the scalar
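(Rough arithmetic behind that remark, added here for reference; the exact spill behaviour of course depends on the surrounding code. The helper below is a hypothetical sketch using a 3.x-era C++ API; at the time of this thread the headers still lived directly under llvm/ rather than llvm/IR/.)

// <32 x float> = 32 * 4 bytes = 128 bytes = 8 XMM registers of 16 bytes each.
// x86-64 only has 16 XMM registers, so a couple of live <32 x float> values
// already exhaust the register file and force spills to the stack.
// <4 x float> = 16 bytes = exactly one XMM register, i.e. one SSE op per use.
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/DerivedTypes.h"

// Build the SSE-friendly <4 x float> type when emitting IR.
llvm::VectorType *sseFloatVectorType(llvm::LLVMContext &C) {
  return llvm::VectorType::get(llvm::Type::getFloatTy(C), 4);
}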
2013 Jul 16
4
[LLVMdev] General strategy to optimize LLVM IR
Hi,
Our DSL emits sub-optimal LLVM IR that we optimize later on (LLVM IR ==> LLVM IR) before dynamically compiling it with the JIT. We would like to simply follow what clang/clang++ does when compiling with the -O1/-O2/-O3 options. Our strategy up to now was to look at the opt.cpp code and take parts of it in order to implement our optimization code.
It appears to be rather difficult to follow
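(Not from the thread: one way to approximate the clang -O2/-O3 pipelines without copying opt.cpp is PassManagerBuilder. A minimal sketch, assuming the LLVM 3.3-era headers and an -O3-style inliner threshold; check the PassManagerBuilder.h of your release for the exact knobs.)

// Sketch: rebuild an -O3-like pass pipeline in-process (LLVM 3.3-era API).
#include "llvm/IR/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

void optimizeModule(llvm::Module *M) {
  llvm::PassManagerBuilder PMB;
  PMB.OptLevel = 3;                                     // roughly -O3
  PMB.Inliner = llvm::createFunctionInliningPass(275);  // -O3-ish threshold

  llvm::FunctionPassManager FPM(M);
  PMB.populateFunctionPassManager(FPM);
  FPM.doInitialization();
  for (llvm::Module::iterator F = M->begin(), E = M->end(); F != E; ++F)
    FPM.run(*F);
  FPM.doFinalization();

  llvm::PassManager MPM;
  PMB.populateModulePassManager(MPM);
  MPM.run(*M);
}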
2013 Jul 18
2
[LLVMdev] LLVM 3.3 JIT code speed
Hi,
The LLVM IR code emitted by our DSL (optimized with -O3-style IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason?
I tried to play with TargetOptions without any success…
Here is the kind of code we use to allocate the JIT:
EngineBuilder builder(fResult->fModule);
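(The snippet above is from the post; the sketch below is not a confirmed fix, just the usual settings to double-check when comparing JIT code quality across releases, assuming the LLVM 3.3 EngineBuilder API.)

// Sketch: LLVM 3.3-era JIT setup with codegen optimization explicitly raised.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"       // links in the (old) JIT
#include "llvm/Support/TargetSelect.h"
#include <string>

llvm::ExecutionEngine *createJIT(llvm::Module *module, std::string &error) {
  llvm::InitializeNativeTarget();

  llvm::EngineBuilder builder(module);
  builder.setErrorStr(&error);
  builder.setEngineKind(llvm::EngineKind::JIT);
  builder.setOptLevel(llvm::CodeGenOpt::Aggressive);  // -O3-style codegen
  return builder.create();
}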
2013 Jul 16
0
[LLVMdev] General strategy to optimize LLVM IR
On Tue, Jul 16, 2013 at 8:16 AM, Stéphane Letz <letz at grame.fr> wrote:
> Hi,
>
> Our DSL emits sub-optimal LLVM IR that we optimize later on (LLVM IR ==> LLVM IR) before dynamically compiling it with the JIT. We would like to simply follow what clang/clang++ does when compiling with the -O1/-O2/-O3 options. Our strategy up to now was to look at the opt.cpp code and take parts of it
2013 Jul 05
2
[LLVMdev] Enabling vectorization with LLVM 3.3 for a DSL emitting LLVM IR
On Jul 5, 2013, at 17:23, Arnold Schwaighofer <aschwaighofer at apple.com> wrote:
>
> On Jul 5, 2013, at 9:50 AM, Stéphane Letz <letz at grame.fr> wrote:
>
>>
>> On Jul 5, 2013, at 04:11, Tobias Grosser <tobias at grosser.es> wrote:
>>
>>> On 07/04/2013 01:39 PM, Stéphane Letz wrote:
>>>> Hi,
>>>>
2013 Jul 05
2
[LLVMdev] Enabling vectorization with LLVM 3.3 for a DSL emitting LLVM IR
On Jul 5, 2013, at 04:11, Tobias Grosser <tobias at grosser.es> wrote:
> On 07/04/2013 01:39 PM, Stéphane Letz wrote:
>> Hi,
>>
>> Our DSL can generate C or directly generate LLVM IR. With LLVM 3.3, we can vectorize the produced C code using clang with -O3, or clang with -O1 then opt -O3 -vectorize-loops. But the version of the same program that directly generates LLVM IR cannot be
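(Added note, not from the thread: when the passes are driven from the C++ API instead of clang/opt, the loop vectorizer has to be requested explicitly. A sketch assuming the LLVM 3.3 PassManagerBuilder fields; verify the member names against PassManagerBuilder.h.)

// Sketch: ask PassManagerBuilder for the vectorizers when optimizing
// DSL-emitted IR in-process (LLVM 3.3-era API).
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

void buildVectorizingPipeline(llvm::PassManager &MPM) {
  llvm::PassManagerBuilder PMB;
  PMB.OptLevel = 3;
  PMB.LoopVectorize = true;  // what clang -O3 / opt -vectorize-loops turn on
  PMB.SLPVectorize = true;   // straight-line (SLP) vectorizer, new in 3.3
  PMB.populateModulePassManager(MPM);
}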
2010 May 28
3
[LLVMdev] Vectorized LLVM IR
Hi,
We are experimenting with directly generating vectorized LLVM IR (using <8 x float> kinds of types), then compiling the code to SSE on a 64-bit machine. Right now the equivalent code in scalar mode still outperforms the SSE one.
What is the quality of the SSE support in the X86 LLVM backend? Are there any specific things to be aware of to improve the speed?
Thanks
Stéphane Letz
2010 Jun 03
1
[LLVMdev] Generating Floating point constants
> ------------------------------
>
> Message: 4
> Date: Wed, 2 Jun 2010 11:07:39 -0700
> From: Dale Johannesen <dalej at apple.com>
> Subject: Re: [LLVMdev] Generating Floating point constants
> To: Stéphane Letz <letz at free.fr>
> Cc: llvmdev at cs.uiuc.edu
> Message-ID: <AEC895CC-E887-4329-8743-FA606BD401F6 at apple.com>
> Content-Type:
2010 May 29
0
[LLVMdev] Vectorized LLVM IR
On Sat, May 29, 2010 at 1:23 AM, Stéphane Letz <letz at grame.fr> wrote:
>>
>> <32 x float> takes up 8 SSE registers; you're likely running into
>> issues with register pressure. Does it work better if you use
>> something smaller like <4 x float>?
>>
>> Besides that, I don't see any obvious issues.
>>
>> -Eli
>
>
2010 Feb 25
1
Compilation for iPhone (celt 0.7.1)
Hi,
In case it is of any help: to compile a static library for the iPhone, I had to add the following 2 lines to the plc.c file:
#include "arch.h"
#include "stack_alloc.h"
otherwise the "celt_word16..." types and "VARDECL..." are not defined.
Best Regards
Stéphane Letz
2010 May 29
3
[LLVMdev] Vectorized LLVM IR
On May 29, 2010, at 01:08, Bill Wendling wrote:
> Hi Stéphane,
>
> The SSE support in the LLVM backend is fine. What is the code that's generated? Do you have some short examples of where LLVM doesn't do as well as the equivalent scalar code?
>
> -bw
>
> On May 28, 2010, at 12:13 PM, Stéphane Letz wrote:
We are actually testing LLVM for the Faust language
2013 Jul 18
0
[LLVMdev] LLVM 3.3 JIT code speed
On Thu, Jul 18, 2013 at 9:07 AM, Stéphane Letz <letz at grame.fr> wrote:
> Hi,
>
> The LLVM IR code emitted by our DSL (optimized with -O3-style IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason?
>
> I tried to play with TargetOptions without any success…
>
> Here is the kind of code we use to
2007 Jun 15
2
[LLVMdev] Strategy to compile for LLVM IR
Hi,
We have a compiler for the Faust language (faust.grame.fr) that
currently compiles to a C++ class which implements a DSP plug-in with
several methods.
Our strategy to compile to LLVM IR instead is the following:
- use the current Faust ==> C++ compiler to compile an "empty" plug-in
that we use as a template C++ class.
- compile this template C++ class using "llvm-g++
2007 Jul 04
1
[LLVMdev] "LLVM backend for Faust" web page
Hi,
We have a web page on our "LLVM backend for Faust" project available
here: http://www.grame.fr/~letz/faust_llvm.html.
Best Regards
Stephane Letz
2013 Jul 05
0
[LLVMdev] Enabling vectorization with LLVM 3.3 for a DSL emitting LLVM IR
On Jul 5, 2013, at 10:43 AM, Stéphane Letz <letz at grame.fr> wrote:
>
> 1) The "entry" block is the first block of the function, right?
Yes.
>
> 2) Do you mean *all* "alloca" in a function always have to be in the first entry block?
If you want them converted into SSA variables early on, yes.
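(The common way to satisfy that from the C++ API is a temporary IRBuilder pinned to the start of the entry block; the sketch below is the usual Kaleidoscope-style idiom, assuming the LLVM 3.3-era headers, not code from this thread.)

// Sketch: emit every alloca at the top of the entry block so that
// mem2reg/SROA can promote it to an SSA value early on.
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"

llvm::AllocaInst *createEntryBlockAlloca(llvm::Function *F, llvm::Type *Ty,
                                         const char *Name) {
  llvm::BasicBlock &Entry = F->getEntryBlock();
  // Insert at the very beginning of the entry block, independent of where
  // the main builder's insertion point currently is.
  llvm::IRBuilder<> TmpB(&Entry, Entry.begin());
  return TmpB.CreateAlloca(Ty, 0, Name);
}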
2013 Jul 05
1
[LLVMdev] Enabling vectorization with LLVM 3.3 for a DSL emitting LLVM IR
On Jul 5, 2013, at 17:48, Arnold Schwaighofer <aschwaighofer at apple.com> wrote:
>
> On Jul 5, 2013, at 10:43 AM, Stéphane Letz <letz at grame.fr> wrote:
>>
>> 1) The "entry" block is the first block of the function, right?
>
> Yes.
OK
>
>>
>> 2) Do you mean *all* "alloca" in a function always have to be in the first entry
2013 Jul 18
2
[LLVMdev] LLVM 3.3 JIT code speed
On Jul 18, 2013, at 19:07, Eli Friedman <eli.friedman at gmail.com> wrote:
> On Thu, Jul 18, 2013 at 9:07 AM, Stéphane Letz <letz at grame.fr> wrote:
>> Hi,
>>
>> The LLVM IR code emitted by our DSL (optimized with -O3-style IR ==> IR passes) runs slower when executed with the LLVM 3.3 JIT, compared to what we had with LLVM 3.1. What could be the reason?
>>