Displaying 20 results from an estimated 3000 matches similar to: "[LLVMdev] sincos functions"
2011 Sep 15
0
[LLVMdev] sincos functions
Hi Suresh,
> I was trying to compare the performance of icc, gcc and llvm on the
> program almabench.c in the Coyote Benchmark suite. Here is a line of code
> from the program.
>
>
> da = da + (ca[np][k] * cos(arga) + sa[np][k] * sin(arga)) * 0.0000001;
>
> gcc and icc are performing way better than llvm as they are using
> the 'sincos' library function to
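The rewrite gcc and icc effectively perform can be sketched in plain C. This is
illustrative only, not code from almabench.c: it assumes glibc's sincos()
extension, and the helper name and array-row parameters are placeholders.
-------------------
#define _GNU_SOURCE        /* glibc declares sincos() under this macro */
#include <math.h>

/* One sincos() call replaces the separate sin(arga) and cos(arga) calls,
 * which is where gcc and icc gain over a compiler that emits two libcalls. */
void accumulate(double *da, const double *ca_row,
                const double *sa_row, int k, double arga)
{
    double s, c;
    sincos(arga, &s, &c);
    *da += (ca_row[k] * c + sa_row[k] * s) * 0.0000001;
}
-------------------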
2016 Jun 15
2
Sincos for X86_64's GNUX32 and ARM's GNUEABI/GNUEABIHF environments
Hi,
While writing http://reviews.llvm.org/D20916, I stumbled across some code affecting ARM and X86_64 environments that looks like it might be unintentional. I thought I should ask about it here, since that patch has a '[mips]' tag and therefore might not be noticed by someone who knows these targets.
I've noticed that the GNUX32 and GNUEABI/GNUEABIHF environments don't make use of the sincos
2013 Jan 22
2
[LLVMdev] sincos optimization
Hi,
I'm looking at http://llvm.org/bugs/show_bug.cgi?id=13204 which involves converting calls to sin and cos to sincos (when available)
Initially I thought about transforming calls to sinf/cosf to sincosf. However, I don't think this is a legal transformation given that a declaration for a function called sinf is not necessarily the standard library function.
Therefore it makes sense to
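To make the legality point concrete, here is a small C sketch (not from the
thread, and assuming the GNU sincosf() extension is available on the target):
fusing the two calls is only sound when sinf and cosf are known to be the C
library functions rather than unrelated user functions that share the names.
-------------------
#define _GNU_SOURCE           /* glibc declares sincosf() under this macro */
#include <math.h>

/* Before: two libcalls with the same argument. */
float before(float x)
{
    return sinf(x) + cosf(x);
}

/* After: the hand-written equivalent of the intended transformation. */
float after(float x)
{
    float s, c;
    sincosf(x, &s, &c);
    return s + c;
}
-------------------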
2016 Apr 04
2
RFC: A proposal for vectorizing loops with calls to math functions using SVML
Hi Sanjay,
For sincos calls, I’m currently just going through isTriviallyVectorizable(), which was good enough to get things working so that I could test the translation. I don’t see why this cannot be changed to use addVectorizableFunctionsFromVecLib(). The other functions that I’m working with are already vectorized using the loop pragma. Those include sin, cos, exp, log, and pow.
From: Sanjay
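For context, the kind of loop under discussion looks roughly like the C below.
This is an illustrative sketch, not code from the patch: the pragma is the
standard clang loop hint, and once a scalar-to-vector mapping for sinf is
registered (e.g. via a vector math library), the vectorizer can replace the
scalar call with a vector variant.
-------------------
#include <math.h>

#define N 1024   /* placeholder size */

void apply_sin(float *restrict out, const float *restrict in)
{
#pragma clang loop vectorize(enable)
    for (int i = 0; i < N; ++i)
        out[i] = sinf(in[i]);   /* candidate for a vector sinf call */
}
-------------------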
2016 Apr 01
2
RFC: A proposal for vectorizing loops with calls to math functions using SVML
RFC: A proposal for vectorizing loops with calls to math functions using SVML (short
vector math library).
=========
Overview
=========
Very simply, SVML (Intel short vector math library) functions are vector variants of
scalar math functions that take vector arguments, apply an operation to each
element, and store the result in a vector register. These vector variants can be
generated by the
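As a concrete (purely illustrative) picture of what "vector variant" means here,
assuming 4-wide single-precision SSE vectors and the __svml_sinf4 entry point
provided by Intel's SVML library:
-------------------
#include <immintrin.h>
#include <math.h>

/* Scalar form: one sinf call per element. */
void scalar_sin(float *restrict out, const float *restrict in, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = sinf(in[i]);
}

/* Vector form: one call computes four sines at once. The prototype below is
 * illustrative; the real symbol comes from SVML and must be linked in. */
extern __m128 __svml_sinf4(__m128 x);

void vector_sin(float *restrict out, const float *restrict in, int n)
{
    int i;
    for (i = 0; i + 3 < n; i += 4) {
        __m128 v = _mm_loadu_ps(&in[i]);          /* load 4 floats */
        _mm_storeu_ps(&out[i], __svml_sinf4(v));  /* 4 results in one call */
    }
    for (; i < n; ++i)                            /* scalar remainder */
        out[i] = sinf(in[i]);
}
-------------------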
2003 Nov 18
2
[LLVMdev] [Fwd: Optimization: Conclusions from Evolutionary Analysis]
I'm cross-posting the message below (from GCC list) because I believe it
would (at some point) be very beneficial to build an evolutionary
optimization pass into LLVM. The idea would be to discover the perfect
set of optimizations for a given program by trying them all and
analyzing the execution times of each. This would be somewhat like
profile driven optimization except the profile is
2013 Jan 22
0
[LLVMdev] sincos optimization
On 22/01/13 05:30, Redmond, Paul wrote:
[...]
> I'm looking at http://llvm.org/bugs/show_bug.cgi?id=13204 which involves converting calls to sin and cos to sincos (when available)
>
> Initially I thought about transforming calls to sinf/cosf to sincosf. However, I don't think this is a legal transformation given that a declaration for a function called sinf is not necessarily the
2011 Jun 17
1
[LLVMdev] Loop Unroll Factor
Devang,
I meant as an end user.
-Suresh
On Thu, Jun 16, 2011 at 11:00 PM, Devang Patel <dpatel at apple.com> wrote:
> Suresh,
>
>
> On Jun 15, 2011, at 9:13 PM, Suresh Purini wrote:
>
>> Dear all,
>>
>> What is the default loop-unroll factor in llvm? How can we specify
>> our own unroll-factor?
>
> Here "we" means end user or a
2011 Jun 16
2
[LLVMdev] Loop Unroll Factor
Dear all,
What is the default loop-unroll factor in llvm? How can we specify
our own unroll-factor?
-Suresh
2011 Sep 16
1
[LLVMdev] Problem with loop-unrolling
Hello,
When we invoke the loop-unroll pass, the compiler crashes. From the
earlier posts on the mailing list and from the bug reports, this is a
known problem.
Is there someone working on this bug?
-Suresh
2011 May 03
2
adaptIntegrate - how to pass additional parameters to the integrand
Hello,
I am trying to use the adaptIntegrate function, but I need to pass a few
additional parameters to the integrand. However, this function does not seem
to have the flexibility to pass such additional parameters.
Am I missing something, or is this a known limitation? Is there a good
way around this restriction, if it does exist?
Many thanks for your time.
HC
2011 Jun 19
2
[LLVMdev] Phase Interactions
Dear all,
I am doing a few experiments to understand optimization phase
interactions. Here is a brief description of my experiments.
1. I picked the list of machine independent optimizations acting on
llvm IR (those that are enabled at O3).
2. for each optimization in the optimization-list
a) Compiled the program using 'clang -c -O0 -flto program.c'
b) opt
2011 Jun 16
0
[LLVMdev] Loop Unroll Factor
Suresh,
On Jun 15, 2011, at 9:13 PM, Suresh Purini wrote:
> Dear all,
>
> What is the default loop-unroll factor in llvm? How can we specify
> our own unroll-factor?
Here "we" means end user or a compiler developer ?
The threshold is 150, see LoopUnrollPass.cpp
-
Devang
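For the end-user side of the question (illustrative only, and post-dating this
2011 thread): a reasonably recent clang lets you request a specific unroll
factor for one loop with a loop hint pragma, rather than relying on the pass
threshold.
-------------------
/* The function and array names are placeholders. */
int sum4(const int *a, int n)
{
    int s = 0;
#pragma clang loop unroll_count(4)   /* ask for an unroll factor of 4 */
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}
-------------------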
2013 Feb 09
1
[LLVMdev] Impact of an analysis pass on program run time
Hello,
I am working on finding good optimization sequences for a given program
(the phase ordering problem). I have the following setup.
1) The source programs are translated into LLVM IR using -O0 + -scalarrepl.
2) Find an optimization sequence using some strategy which translates the
IR generated in the previous step into another IR.
3) Apply llc -O2 and map the IR into target assembly code.
2011 Sep 21
1
[LLVMdev] Fortran to llvm IR
Hello,
How can I convert Fortran programs to llvm IR? Can I use dragonegg to
generate llvm IR and then use the rest of the llvm tool set as it is?
-Suresh
2003 Nov 18
0
[LLVMdev] [Fwd: Optimization: Conclusions from Evolutionary Analysis]
This is a hot topic in the compiler research community, but the focus
there is on
(a) choosing the right optimization sequences internally and
transparently, rather than through combinations of options,
(b) performance prediction techniques so you don't actually have to run a
gazillion different choices, and perhaps can even avoid the problem of
choosing representative inputs, as you talked
2011 Jun 19
0
[LLVMdev] Phase Interactions
On 19 June 2011 14:44, Suresh Purini <suresh.purini at gmail.com> wrote:
> I am doing a few experiments to understand optimization phase
> interactions. Here is a brief description of my experiments.
>
> 1. I picked the list of machine independent optimizations acting on
> llvm IR (those that are enabled at O3).
> 2. for each optimization in the optimization-list
>
2011 Jun 25
1
[LLVMdev] Loop Unrolling
Hello,
I tried to do some small experiments on the loop unroll
transformation. The following is the test program. I compiled it as
follows:
$ opt -loop-rotate -debug-only=loop-unroll -loop-unroll
-unroll-count=2 test1.o -S -o test1.s
------------------
#include <stdio.h>

int a[1024];

int main()
{
  int i, sum = 0;
  for (i = 0; i < 1024; ++i)
    sum += a[i];
  printf("%d", sum);
  return 0;
}
-------------------
I got
2004 Sep 21
2
[LLVMdev] Compiler Benchmarks
FYI,
Yesterday's Slashdot had an article about Linux compiler benchmarks from
Coyote Gulch (Scott Ladd). In this update he compares GCC and ICC. You
can read the article here:
http://www.coyotegulch.com/reviews/linux_compilers/
Of particular note was his use of SciMark 2.0, which is a NIST-developed
benchmark for scientific computing. It's available in both Java and C and
computes a MFLOPS
2011 Jul 06
1
[LLVMdev] Optimization Order at O2/O3
Dear all,
Is there a command line argument which prints the order of
application of various analysis/transformation passes on a program
using clang?
-Suresh